2023-07-18 20:14:41,455 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99 2023-07-18 20:14:41,475 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-18 20:14:41,495 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 20:14:41,495 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23, deleteOnExit=true 2023-07-18 20:14:41,496 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 20:14:41,496 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/test.cache.data in system properties and HBase conf 2023-07-18 20:14:41,497 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 20:14:41,497 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir in system properties and HBase conf 2023-07-18 20:14:41,498 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 20:14:41,498 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 20:14:41,498 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 20:14:41,634 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-18 20:14:42,057 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 20:14:42,062 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 20:14:42,062 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 20:14:42,062 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 20:14:42,063 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 20:14:42,063 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 20:14:42,063 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 20:14:42,064 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 20:14:42,064 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 20:14:42,064 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 20:14:42,064 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/nfs.dump.dir in system properties and HBase conf 2023-07-18 20:14:42,065 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/java.io.tmpdir in system properties and HBase conf 2023-07-18 20:14:42,065 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 20:14:42,065 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 20:14:42,065 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 20:14:42,604 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 20:14:42,609 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 20:14:42,923 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-18 20:14:43,106 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-18 20:14:43,122 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:14:43,166 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:14:43,212 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/java.io.tmpdir/Jetty_localhost_46453_hdfs____.pdgthi/webapp 2023-07-18 20:14:43,388 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46453 2023-07-18 20:14:43,401 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 20:14:43,402 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 20:14:43,892 WARN [Listener at localhost/37087] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 20:14:43,973 WARN [Listener at localhost/37087] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 20:14:43,992 WARN [Listener at localhost/37087] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:14:43,999 INFO [Listener at localhost/37087] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:14:44,003 INFO [Listener at localhost/37087] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/java.io.tmpdir/Jetty_localhost_42347_datanode____3n03of/webapp 2023-07-18 20:14:44,111 INFO [Listener at localhost/37087] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42347 2023-07-18 20:14:44,598 WARN [Listener at localhost/33173] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 20:14:44,678 WARN [Listener at localhost/33173] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 20:14:44,686 WARN [Listener at localhost/33173] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:14:44,689 INFO [Listener at localhost/33173] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:14:44,711 INFO [Listener at localhost/33173] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/java.io.tmpdir/Jetty_localhost_44955_datanode____.4df88w/webapp 2023-07-18 20:14:44,845 INFO [Listener at localhost/33173] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44955 2023-07-18 20:14:44,878 WARN [Listener at localhost/38921] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 20:14:44,933 WARN [Listener at localhost/38921] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 20:14:44,936 WARN [Listener at localhost/38921] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:14:44,938 INFO [Listener at localhost/38921] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:14:44,943 INFO [Listener at localhost/38921] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/java.io.tmpdir/Jetty_localhost_34095_datanode____.gp8pa/webapp 2023-07-18 20:14:45,089 INFO [Listener at localhost/38921] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34095 2023-07-18 20:14:45,158 WARN [Listener at localhost/39395] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 20:14:45,279 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe2b610d8a42dd67a: Processing first storage report for DS-904a127d-cae0-4246-b3ff-e88ccf67c32e from datanode 3dfbad27-7337-496c-97d4-b22e27464a76 2023-07-18 20:14:45,281 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe2b610d8a42dd67a: from storage DS-904a127d-cae0-4246-b3ff-e88ccf67c32e node DatanodeRegistration(127.0.0.1:40715, datanodeUuid=3dfbad27-7337-496c-97d4-b22e27464a76, infoPort=46109, 
infoSecurePort=0, ipcPort=33173, storageInfo=lv=-57;cid=testClusterID;nsid=1853890273;c=1689711282686), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-18 20:14:45,281 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb19c2a54db9649d4: Processing first storage report for DS-ba8906de-792b-42d7-9fac-76f4e7644349 from datanode 617e6469-29c1-46fc-94da-cf04d48b2089 2023-07-18 20:14:45,281 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb19c2a54db9649d4: from storage DS-ba8906de-792b-42d7-9fac-76f4e7644349 node DatanodeRegistration(127.0.0.1:34903, datanodeUuid=617e6469-29c1-46fc-94da-cf04d48b2089, infoPort=43023, infoSecurePort=0, ipcPort=38921, storageInfo=lv=-57;cid=testClusterID;nsid=1853890273;c=1689711282686), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:14:45,282 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe2b610d8a42dd67a: Processing first storage report for DS-f6b934ac-0982-497e-889a-816ab352853d from datanode 3dfbad27-7337-496c-97d4-b22e27464a76 2023-07-18 20:14:45,282 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe2b610d8a42dd67a: from storage DS-f6b934ac-0982-497e-889a-816ab352853d node DatanodeRegistration(127.0.0.1:40715, datanodeUuid=3dfbad27-7337-496c-97d4-b22e27464a76, infoPort=46109, infoSecurePort=0, ipcPort=33173, storageInfo=lv=-57;cid=testClusterID;nsid=1853890273;c=1689711282686), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:14:45,282 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb19c2a54db9649d4: Processing first storage report for DS-f5037cde-e5a4-4b07-b773-335b51b6c7bd from datanode 617e6469-29c1-46fc-94da-cf04d48b2089 2023-07-18 20:14:45,282 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb19c2a54db9649d4: from storage DS-f5037cde-e5a4-4b07-b773-335b51b6c7bd node DatanodeRegistration(127.0.0.1:34903, datanodeUuid=617e6469-29c1-46fc-94da-cf04d48b2089, infoPort=43023, infoSecurePort=0, ipcPort=38921, storageInfo=lv=-57;cid=testClusterID;nsid=1853890273;c=1689711282686), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:14:45,296 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb32783c61756347e: Processing first storage report for DS-31032bf6-54fd-47cd-a202-13b53ad166ad from datanode f4c230e2-7837-4991-a6ab-6c6804d76b91 2023-07-18 20:14:45,296 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb32783c61756347e: from storage DS-31032bf6-54fd-47cd-a202-13b53ad166ad node DatanodeRegistration(127.0.0.1:46743, datanodeUuid=f4c230e2-7837-4991-a6ab-6c6804d76b91, infoPort=45735, infoSecurePort=0, ipcPort=39395, storageInfo=lv=-57;cid=testClusterID;nsid=1853890273;c=1689711282686), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:14:45,296 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb32783c61756347e: Processing first storage report for DS-c0099d33-d55b-4476-9f4d-7f7d14dec028 from datanode f4c230e2-7837-4991-a6ab-6c6804d76b91 2023-07-18 20:14:45,296 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb32783c61756347e: from storage 
DS-c0099d33-d55b-4476-9f4d-7f7d14dec028 node DatanodeRegistration(127.0.0.1:46743, datanodeUuid=f4c230e2-7837-4991-a6ab-6c6804d76b91, infoPort=45735, infoSecurePort=0, ipcPort=39395, storageInfo=lv=-57;cid=testClusterID;nsid=1853890273;c=1689711282686), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:14:45,604 DEBUG [Listener at localhost/39395] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99 2023-07-18 20:14:45,716 INFO [Listener at localhost/39395] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/zookeeper_0, clientPort=52937, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 20:14:45,734 INFO [Listener at localhost/39395] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=52937 2023-07-18 20:14:45,745 INFO [Listener at localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:45,748 INFO [Listener at localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:46,436 INFO [Listener at localhost/39395] util.FSUtils(471): Created version file at hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67 with version=8 2023-07-18 20:14:46,436 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/hbase-staging 2023-07-18 20:14:46,445 DEBUG [Listener at localhost/39395] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 20:14:46,445 DEBUG [Listener at localhost/39395] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 20:14:46,445 DEBUG [Listener at localhost/39395] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 20:14:46,445 DEBUG [Listener at localhost/39395] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
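
The StartMiniClusterOption logged at the top of this run (numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1) corresponds to what the test requests from HBaseTestingUtility before the daemons below come up. A minimal sketch of that setup, assuming the standard HBase 2.x test APIs; the class name and JUnit wiring are illustrative, not taken from TestRSGroupsAdmin1:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MiniClusterStartupSketch {
  // Shared test utility; owns the temporary rootdir, DFS, ZooKeeper and HBase daemons.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpCluster() throws Exception {
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)        // one HMaster, as logged above
        .numRegionServers(3)  // three region servers
        .numDataNodes(3)      // three HDFS datanodes
        .numZkServers(1)      // one mini ZooKeeper server
        .build();
    // Starts DFS, the mini ZooKeeper cluster and the HBase daemons, producing
    // startup output like the log lines in this section.
    TEST_UTIL.startMiniCluster(option);
  }

  @AfterClass
  public static void tearDownCluster() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}
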
2023-07-18 20:14:46,832 INFO [Listener at localhost/39395] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-18 20:14:47,386 INFO [Listener at localhost/39395] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:14:47,426 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:47,426 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:47,427 INFO [Listener at localhost/39395] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:14:47,427 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:47,427 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:14:47,600 INFO [Listener at localhost/39395] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:14:47,704 DEBUG [Listener at localhost/39395] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-18 20:14:47,831 INFO [Listener at localhost/39395] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32929 2023-07-18 20:14:47,845 INFO [Listener at localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:47,847 INFO [Listener at localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:47,877 INFO [Listener at localhost/39395] zookeeper.RecoverableZooKeeper(93): Process identifier=master:32929 connecting to ZooKeeper ensemble=127.0.0.1:52937 2023-07-18 20:14:47,924 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:329290x0, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:14:47,926 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:32929-0x1017a1298a70000 connected 2023-07-18 20:14:47,959 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:14:47,960 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:14:47,969 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:14:47,980 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32929 2023-07-18 20:14:47,981 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32929 2023-07-18 20:14:47,982 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32929 2023-07-18 20:14:47,985 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32929 2023-07-18 20:14:47,986 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32929 2023-07-18 20:14:48,023 INFO [Listener at localhost/39395] log.Log(170): Logging initialized @7313ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-18 20:14:48,165 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:14:48,166 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:14:48,167 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:14:48,169 INFO [Listener at localhost/39395] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 20:14:48,169 INFO [Listener at localhost/39395] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:14:48,169 INFO [Listener at localhost/39395] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:14:48,173 INFO [Listener at localhost/39395] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
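
The RpcExecutor lines above (for example default.FPBQ.Fifo with handlerCount=3, maxQueueLength=30) reflect configuration rather than hard-coded values: the handler count comes from hbase.regionserver.handler.count, and the queue length defaults to ten calls per handler unless overridden. A hedged sketch of tuning these knobs on the test Configuration before startup; the values shown merely mirror this run:

import org.apache.hadoop.conf.Configuration;

public class RpcHandlerConfigSketch {
  // Illustrative only: the knobs behind the RpcExecutor log lines above.
  public static void tuneRpcHandlers(Configuration conf) {
    // handlerCount=3 in the default.FPBQ.Fifo line corresponds to this key.
    conf.setInt("hbase.regionserver.handler.count", 3);
    // maxQueueLength=30 is the default of 10 calls per handler; set it
    // explicitly if a test needs a deeper call queue.
    conf.setInt("hbase.ipc.server.max.callqueue.length", 30);
  }
}
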
2023-07-18 20:14:48,233 INFO [Listener at localhost/39395] http.HttpServer(1146): Jetty bound to port 36101 2023-07-18 20:14:48,236 INFO [Listener at localhost/39395] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:14:48,264 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:48,267 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34048a3a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:14:48,268 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:48,268 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@56d79499{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:14:48,450 INFO [Listener at localhost/39395] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:14:48,462 INFO [Listener at localhost/39395] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:14:48,463 INFO [Listener at localhost/39395] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:14:48,464 INFO [Listener at localhost/39395] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 20:14:48,471 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:48,497 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4c53ffcd{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/java.io.tmpdir/jetty-0_0_0_0-36101-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1313585193896599477/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 20:14:48,510 INFO [Listener at localhost/39395] server.AbstractConnector(333): Started ServerConnector@604d87c0{HTTP/1.1, (http/1.1)}{0.0.0.0:36101} 2023-07-18 20:14:48,510 INFO [Listener at localhost/39395] server.Server(415): Started @7800ms 2023-07-18 20:14:48,513 INFO [Listener at localhost/39395] master.HMaster(444): hbase.rootdir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67, hbase.cluster.distributed=false 2023-07-18 20:14:48,586 INFO [Listener at localhost/39395] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:14:48,587 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:48,587 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:48,587 INFO 
[Listener at localhost/39395] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:14:48,587 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:48,587 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:14:48,593 INFO [Listener at localhost/39395] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:14:48,596 INFO [Listener at localhost/39395] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37953 2023-07-18 20:14:48,598 INFO [Listener at localhost/39395] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 20:14:48,605 DEBUG [Listener at localhost/39395] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 20:14:48,606 INFO [Listener at localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:48,608 INFO [Listener at localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:48,610 INFO [Listener at localhost/39395] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37953 connecting to ZooKeeper ensemble=127.0.0.1:52937 2023-07-18 20:14:48,613 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:379530x0, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:14:48,615 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37953-0x1017a1298a70001 connected 2023-07-18 20:14:48,615 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:14:48,617 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:14:48,618 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:14:48,618 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37953 2023-07-18 20:14:48,618 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37953 2023-07-18 20:14:48,619 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37953 2023-07-18 20:14:48,620 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37953 2023-07-18 20:14:48,620 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37953 2023-07-18 20:14:48,623 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:14:48,623 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:14:48,623 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:14:48,624 INFO [Listener at localhost/39395] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 20:14:48,624 INFO [Listener at localhost/39395] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:14:48,624 INFO [Listener at localhost/39395] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:14:48,625 INFO [Listener at localhost/39395] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 20:14:48,627 INFO [Listener at localhost/39395] http.HttpServer(1146): Jetty bound to port 41581 2023-07-18 20:14:48,627 INFO [Listener at localhost/39395] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:14:48,631 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:48,631 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1c6f9d30{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:14:48,631 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:48,631 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c80b18{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:14:48,752 INFO [Listener at localhost/39395] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:14:48,754 INFO [Listener at localhost/39395] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:14:48,754 INFO [Listener at localhost/39395] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:14:48,754 INFO [Listener at localhost/39395] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 20:14:48,756 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:48,760 INFO 
[Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@40f2000c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/java.io.tmpdir/jetty-0_0_0_0-41581-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3285114213153130311/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:14:48,762 INFO [Listener at localhost/39395] server.AbstractConnector(333): Started ServerConnector@22a36f37{HTTP/1.1, (http/1.1)}{0.0.0.0:41581} 2023-07-18 20:14:48,762 INFO [Listener at localhost/39395] server.Server(415): Started @8051ms 2023-07-18 20:14:48,774 INFO [Listener at localhost/39395] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:14:48,774 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:48,775 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:48,775 INFO [Listener at localhost/39395] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:14:48,775 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:48,775 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:14:48,775 INFO [Listener at localhost/39395] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:14:48,777 INFO [Listener at localhost/39395] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43019 2023-07-18 20:14:48,777 INFO [Listener at localhost/39395] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 20:14:48,778 DEBUG [Listener at localhost/39395] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 20:14:48,779 INFO [Listener at localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:48,780 INFO [Listener at localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:48,781 INFO [Listener at localhost/39395] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43019 connecting to ZooKeeper ensemble=127.0.0.1:52937 2023-07-18 20:14:48,785 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:430190x0, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 
20:14:48,786 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43019-0x1017a1298a70002 connected 2023-07-18 20:14:48,786 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:14:48,787 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:14:48,788 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:14:48,789 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43019 2023-07-18 20:14:48,789 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43019 2023-07-18 20:14:48,789 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43019 2023-07-18 20:14:48,789 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43019 2023-07-18 20:14:48,790 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43019 2023-07-18 20:14:48,792 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:14:48,792 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:14:48,792 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:14:48,793 INFO [Listener at localhost/39395] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 20:14:48,793 INFO [Listener at localhost/39395] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:14:48,793 INFO [Listener at localhost/39395] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:14:48,793 INFO [Listener at localhost/39395] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
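
Each region server above connects to the mini ZooKeeper ensemble at 127.0.0.1:52937 and registers its znodes under /hbase. Once the master is active, the usual way for test code to see which servers checked in is the Admin API. A minimal sketch, assuming a running mini cluster and a Configuration such as TEST_UTIL.getConfiguration() from the first sketch; the class and method names are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LiveServersSketch {
  public static void printLiveRegionServers(Configuration conf) throws Exception {
    // The connection locates the active master through the ZooKeeper ensemble
    // logged above, so no master address needs to be hard-coded.
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      for (ServerName server : admin.getClusterMetrics().getServersName()) {
        System.out.println("live region server: " + server);
      }
    }
  }
}
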
2023-07-18 20:14:48,794 INFO [Listener at localhost/39395] http.HttpServer(1146): Jetty bound to port 38865 2023-07-18 20:14:48,794 INFO [Listener at localhost/39395] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:14:48,797 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:48,797 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@64f476b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:14:48,798 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:48,798 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@46770358{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:14:48,922 INFO [Listener at localhost/39395] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:14:48,924 INFO [Listener at localhost/39395] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:14:48,924 INFO [Listener at localhost/39395] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:14:48,925 INFO [Listener at localhost/39395] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 20:14:48,926 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:48,927 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@67816748{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/java.io.tmpdir/jetty-0_0_0_0-38865-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8798753551099319393/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:14:48,929 INFO [Listener at localhost/39395] server.AbstractConnector(333): Started ServerConnector@1ba8dae2{HTTP/1.1, (http/1.1)}{0.0.0.0:38865} 2023-07-18 20:14:48,929 INFO [Listener at localhost/39395] server.Server(415): Started @8219ms 2023-07-18 20:14:48,944 INFO [Listener at localhost/39395] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:14:48,944 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:48,944 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:48,945 INFO [Listener at localhost/39395] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:14:48,945 INFO 
[Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:48,945 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:14:48,945 INFO [Listener at localhost/39395] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:14:48,948 INFO [Listener at localhost/39395] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41243 2023-07-18 20:14:48,948 INFO [Listener at localhost/39395] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 20:14:48,952 DEBUG [Listener at localhost/39395] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 20:14:48,953 INFO [Listener at localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:48,955 INFO [Listener at localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:48,956 INFO [Listener at localhost/39395] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41243 connecting to ZooKeeper ensemble=127.0.0.1:52937 2023-07-18 20:14:48,960 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:412430x0, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:14:48,961 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): regionserver:412430x0, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:14:48,962 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): regionserver:412430x0, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:14:48,963 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): regionserver:412430x0, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:14:48,964 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41243-0x1017a1298a70003 connected 2023-07-18 20:14:48,965 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41243 2023-07-18 20:14:48,965 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41243 2023-07-18 20:14:48,965 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41243 2023-07-18 20:14:48,967 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41243 2023-07-18 20:14:48,967 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41243 2023-07-18 
20:14:48,970 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:14:48,970 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:14:48,970 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:14:48,970 INFO [Listener at localhost/39395] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 20:14:48,971 INFO [Listener at localhost/39395] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:14:48,971 INFO [Listener at localhost/39395] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:14:48,971 INFO [Listener at localhost/39395] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 20:14:48,972 INFO [Listener at localhost/39395] http.HttpServer(1146): Jetty bound to port 45299 2023-07-18 20:14:48,972 INFO [Listener at localhost/39395] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:14:48,979 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:48,979 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@145f3cb8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:14:48,980 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:48,980 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7431440f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:14:49,107 INFO [Listener at localhost/39395] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:14:49,108 INFO [Listener at localhost/39395] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:14:49,108 INFO [Listener at localhost/39395] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:14:49,108 INFO [Listener at localhost/39395] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 20:14:49,109 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:49,110 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@38a30dd7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/java.io.tmpdir/jetty-0_0_0_0-45299-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6360894480260684091/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:14:49,111 INFO [Listener at localhost/39395] server.AbstractConnector(333): Started ServerConnector@16240f3c{HTTP/1.1, (http/1.1)}{0.0.0.0:45299} 2023-07-18 20:14:49,112 INFO [Listener at localhost/39395] server.Server(415): Started @8401ms 2023-07-18 20:14:49,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:14:49,122 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@6965f1fe{HTTP/1.1, (http/1.1)}{0.0.0.0:41973} 2023-07-18 20:14:49,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8413ms 2023-07-18 20:14:49,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,32929,1689711286630 2023-07-18 20:14:49,133 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 20:14:49,135 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,32929,1689711286630 2023-07-18 20:14:49,160 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:14:49,160 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:14:49,161 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:14:49,163 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 20:14:49,165 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:14:49,165 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:14:49,168 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,32929,1689711286630 from backup master directory 2023-07-18 20:14:49,168 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 20:14:49,174 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,32929,1689711286630 2023-07-18 20:14:49,174 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 20:14:49,176 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 20:14:49,176 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,32929,1689711286630 2023-07-18 20:14:49,180 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-18 20:14:49,185 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-18 20:14:49,274 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/hbase.id with ID: ff814b3c-b719-44f5-94c5-5c1b8ad72222 2023-07-18 20:14:49,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:49,328 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:14:49,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2636192b to 127.0.0.1:52937 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:14:49,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@157a336b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:14:49,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:14:49,435 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 20:14:49,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-18 20:14:49,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-18 20:14:49,464 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 20:14:49,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 20:14:49,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:14:49,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/data/master/store-tmp 2023-07-18 20:14:49,560 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:49,561 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 20:14:49,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:14:49,561 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:14:49,561 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 20:14:49,561 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:14:49,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 20:14:49,561 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 20:14:49,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/WALs/jenkins-hbase4.apache.org,32929,1689711286630 2023-07-18 20:14:49,594 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32929%2C1689711286630, suffix=, logDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/WALs/jenkins-hbase4.apache.org,32929,1689711286630, archiveDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/oldWALs, maxLogs=10 2023-07-18 20:14:49,656 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK] 2023-07-18 20:14:49,656 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK] 2023-07-18 20:14:49,656 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK] 2023-07-18 20:14:49,664 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-18 20:14:49,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/WALs/jenkins-hbase4.apache.org,32929,1689711286630/jenkins-hbase4.apache.org%2C32929%2C1689711286630.1689711289605 2023-07-18 20:14:49,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK], DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK], DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK]] 2023-07-18 20:14:49,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:14:49,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:49,746 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:14:49,747 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:14:49,830 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:14:49,838 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 20:14:49,872 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 20:14:49,889 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-18 20:14:49,894 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:14:49,896 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:14:49,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:14:49,918 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:49,920 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9883885600, jitterRate=-0.07949142158031464}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:49,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 20:14:49,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 20:14:49,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 20:14:49,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 20:14:49,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 20:14:49,949 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-18 20:14:49,989 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 39 msec 2023-07-18 20:14:49,989 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 20:14:50,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 20:14:50,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-18 20:14:50,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 20:14:50,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 20:14:50,041 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 20:14:50,044 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:14:50,045 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 20:14:50,046 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 20:14:50,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 20:14:50,066 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 20:14:50,066 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 20:14:50,066 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 20:14:50,066 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 20:14:50,066 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:14:50,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,32929,1689711286630, sessionid=0x1017a1298a70000, setting cluster-up flag (Was=false) 2023-07-18 20:14:50,084 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:14:50,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 20:14:50,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,32929,1689711286630 2023-07-18 20:14:50,102 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:14:50,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 20:14:50,109 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,32929,1689711286630 2023-07-18 20:14:50,113 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.hbase-snapshot/.tmp 2023-07-18 20:14:50,121 INFO [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(951): ClusterId : ff814b3c-b719-44f5-94c5-5c1b8ad72222 2023-07-18 20:14:50,121 INFO [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(951): ClusterId : ff814b3c-b719-44f5-94c5-5c1b8ad72222 2023-07-18 20:14:50,125 INFO [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(951): ClusterId : ff814b3c-b719-44f5-94c5-5c1b8ad72222 2023-07-18 20:14:50,130 DEBUG [RS:0;jenkins-hbase4:37953] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 20:14:50,130 DEBUG [RS:1;jenkins-hbase4:43019] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 20:14:50,130 DEBUG [RS:2;jenkins-hbase4:41243] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 20:14:50,138 DEBUG [RS:1;jenkins-hbase4:43019] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 20:14:50,138 DEBUG [RS:1;jenkins-hbase4:43019] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 20:14:50,138 DEBUG [RS:0;jenkins-hbase4:37953] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 20:14:50,139 DEBUG [RS:0;jenkins-hbase4:37953] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 20:14:50,138 DEBUG [RS:2;jenkins-hbase4:41243] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 20:14:50,139 DEBUG [RS:2;jenkins-hbase4:41243] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 20:14:50,143 DEBUG [RS:1;jenkins-hbase4:43019] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 20:14:50,143 DEBUG [RS:2;jenkins-hbase4:41243] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 20:14:50,144 DEBUG [RS:0;jenkins-hbase4:37953] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 20:14:50,145 DEBUG [RS:1;jenkins-hbase4:43019] zookeeper.ReadOnlyZKClient(139): Connect 0x753f4dbc to 127.0.0.1:52937 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-18 20:14:50,145 DEBUG [RS:0;jenkins-hbase4:37953] zookeeper.ReadOnlyZKClient(139): Connect 0x233e4e6e to 127.0.0.1:52937 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:14:50,146 DEBUG [RS:2;jenkins-hbase4:41243] zookeeper.ReadOnlyZKClient(139): Connect 0x53b63469 to 127.0.0.1:52937 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:14:50,160 DEBUG [RS:0;jenkins-hbase4:37953] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@475781e8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:14:50,161 DEBUG [RS:0;jenkins-hbase4:37953] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@557a8ee9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:14:50,164 DEBUG [RS:1;jenkins-hbase4:43019] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@74fb88b8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:14:50,164 DEBUG [RS:1;jenkins-hbase4:43019] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@383d2416, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:14:50,166 DEBUG [RS:2;jenkins-hbase4:41243] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@16b4a07b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:14:50,166 DEBUG [RS:2;jenkins-hbase4:41243] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d299323, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:14:50,197 DEBUG [RS:2;jenkins-hbase4:41243] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41243 2023-07-18 20:14:50,203 DEBUG [RS:1;jenkins-hbase4:43019] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:43019 2023-07-18 20:14:50,207 INFO [RS:2;jenkins-hbase4:41243] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 20:14:50,207 INFO [RS:1;jenkins-hbase4:43019] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 20:14:50,219 INFO [RS:1;jenkins-hbase4:43019] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 20:14:50,215 DEBUG [RS:0;jenkins-hbase4:37953] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37953 2023-07-18 20:14:50,219 DEBUG [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1022): About to register 
with Master. 2023-07-18 20:14:50,219 INFO [RS:2;jenkins-hbase4:41243] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 20:14:50,219 DEBUG [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 20:14:50,219 INFO [RS:0;jenkins-hbase4:37953] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 20:14:50,220 INFO [RS:0;jenkins-hbase4:37953] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 20:14:50,220 DEBUG [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 20:14:50,223 INFO [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,32929,1689711286630 with isa=jenkins-hbase4.apache.org/172.31.14.131:43019, startcode=1689711288774 2023-07-18 20:14:50,227 INFO [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,32929,1689711286630 with isa=jenkins-hbase4.apache.org/172.31.14.131:41243, startcode=1689711288943 2023-07-18 20:14:50,227 INFO [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,32929,1689711286630 with isa=jenkins-hbase4.apache.org/172.31.14.131:37953, startcode=1689711288586 2023-07-18 20:14:50,243 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 20:14:50,254 DEBUG [RS:0;jenkins-hbase4:37953] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 20:14:50,255 DEBUG [RS:2;jenkins-hbase4:41243] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 20:14:50,254 DEBUG [RS:1;jenkins-hbase4:43019] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 20:14:50,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 20:14:50,270 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 20:14:50,273 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 20:14:50,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-18 20:14:50,377 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53217, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 20:14:50,377 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52883, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 20:14:50,377 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59599, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 20:14:50,391 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:14:50,405 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:14:50,407 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:14:50,426 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 20:14:50,435 DEBUG [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(2830): 
Master is not running yet 2023-07-18 20:14:50,435 DEBUG [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 20:14:50,435 WARN [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 20:14:50,435 DEBUG [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 20:14:50,436 WARN [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 20:14:50,436 WARN [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 20:14:50,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 20:14:50,475 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 20:14:50,476 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 20:14:50,476 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
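
The two StochasticLoadBalancer lines above record the knobs the balancer was loaded with (maxSteps, stepsPerRegion, maxRunningTime and the cost-function list). For readers who want to tune those knobs in a test- or client-side Configuration, a minimal sketch might look like the following; the property names are assumptions inferred from the logged parameter names and should be checked against hbase-default.xml for the release in use, and the values shown are simply the defaults printed in the log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerTuningSketch {
      public static void main(String[] args) {
        // Start from the default HBase configuration on the classpath.
        Configuration conf = HBaseConfiguration.create();
        // Assumed key names matching the logged parameters; verify before relying on them.
        conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30_000);
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
        System.out.println("maxSteps=" + conf.get("hbase.master.balancer.stochastic.maxSteps"));
      }
    }
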
2023-07-18 20:14:50,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:14:50,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:14:50,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:14:50,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:14:50,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 20:14:50,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:14:50,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,479 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689711320479 2023-07-18 20:14:50,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 20:14:50,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 20:14:50,488 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 20:14:50,489 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 20:14:50,492 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 20:14:50,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 20:14:50,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 20:14:50,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 20:14:50,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 20:14:50,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 20:14:50,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 20:14:50,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 20:14:50,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 20:14:50,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 20:14:50,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689711290505,5,FailOnTimeoutGroup] 2023-07-18 20:14:50,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689711290505,5,FailOnTimeoutGroup] 2023-07-18 20:14:50,505 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,505 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 20:14:50,506 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,507 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
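
The hbase:meta descriptor written out above (column families 'info', 'rep_barrier' and 'table' with the listed attributes) is assembled internally by the master during bootstrap. For comparison only, an equivalent descriptor for an ordinary table could be built with the public HBase 2.x client API roughly as sketched below; the table name and the single column family are illustrative assumptions mirroring the 'info' attributes in the log, not anything this test creates.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
      public static void main(String[] args) {
        // Mirror the attributes logged for the 'info' family:
        // BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', BLOCKSIZE => '8192'
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE)
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .build();

        // Hypothetical table name; hbase:meta itself is created by the master, not by clients.
        TableDescriptor table = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example_table"))
            .setColumnFamily(info)
            .build();

        System.out.println(table);
      }
    }

A descriptor like this would normally be handed to Admin.createTable(); here it only serves to decode the key/value dump that the master logs when it writes .tableinfo files.
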
2023-07-18 20:14:50,537 INFO [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,32929,1689711286630 with isa=jenkins-hbase4.apache.org/172.31.14.131:41243, startcode=1689711288943 2023-07-18 20:14:50,537 INFO [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,32929,1689711286630 with isa=jenkins-hbase4.apache.org/172.31.14.131:43019, startcode=1689711288774 2023-07-18 20:14:50,537 INFO [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,32929,1689711286630 with isa=jenkins-hbase4.apache.org/172.31.14.131:37953, startcode=1689711288586 2023-07-18 20:14:50,543 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32929] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:50,544 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 20:14:50,545 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 20:14:50,549 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32929] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:50,549 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 20:14:50,550 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 20:14:50,550 DEBUG [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67 2023-07-18 20:14:50,550 DEBUG [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37087 2023-07-18 20:14:50,550 DEBUG [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36101 2023-07-18 20:14:50,551 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32929] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:50,551 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 20:14:50,551 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 20:14:50,552 DEBUG [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67 2023-07-18 20:14:50,553 DEBUG [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37087 2023-07-18 20:14:50,553 DEBUG [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36101 2023-07-18 20:14:50,554 DEBUG [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67 2023-07-18 20:14:50,554 DEBUG [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37087 2023-07-18 20:14:50,554 DEBUG [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36101 2023-07-18 20:14:50,565 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:14:50,567 DEBUG [RS:2;jenkins-hbase4:41243] zookeeper.ZKUtil(162): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:50,572 WARN [RS:2;jenkins-hbase4:41243] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 20:14:50,573 DEBUG [RS:0;jenkins-hbase4:37953] zookeeper.ZKUtil(162): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:50,571 DEBUG [RS:1;jenkins-hbase4:43019] zookeeper.ZKUtil(162): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:50,573 WARN [RS:0;jenkins-hbase4:37953] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 20:14:50,573 INFO [RS:2;jenkins-hbase4:41243] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:14:50,574 WARN [RS:1;jenkins-hbase4:43019] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
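
Most of the coordination visible in these lines happens through ZooKeeper watches (the repeated "Set watcher on existing znode=/hbase/rs/..." entries and the NodeCreated/NodeChildrenChanged events). As a rough illustration of the underlying mechanism only, a bare ZooKeeper client can register the same kind of one-shot watch as sketched below; the connect string and znode path are placeholders, and this uses the plain ZooKeeper API rather than HBase's ZKWatcher/ZKUtil wrappers.

    import java.util.concurrent.TimeUnit;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkWatchSketch {
      public static void main(String[] args) throws Exception {
        // Placeholder quorum address; the test uses an in-process mini ZooKeeper on a random port.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90_000,
            event -> System.out.println("event=" + event.getType() + " path=" + event.getPath()));

        // exists(path, watch=true) registers a one-shot watch: the callback above fires once
        // when the znode is created, deleted, or its data changes, and must then be re-registered.
        zk.exists("/hbase/master", true);

        TimeUnit.SECONDS.sleep(5); // give the watch a chance to fire in this toy example
        zk.close();
      }
    }

HBase wraps this pattern in the ZKWatcher and ZKUtil classes named in the log, which re-register watches and dispatch events such as NodeCreated and NodeChildrenChanged to listeners like the master's ActiveMasterManager and RegionServerTracker.
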
2023-07-18 20:14:50,574 DEBUG [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:50,574 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37953,1689711288586] 2023-07-18 20:14:50,574 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41243,1689711288943] 2023-07-18 20:14:50,574 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43019,1689711288774] 2023-07-18 20:14:50,574 INFO [RS:0;jenkins-hbase4:37953] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:14:50,574 INFO [RS:1;jenkins-hbase4:43019] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:14:50,576 DEBUG [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:50,576 DEBUG [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:50,590 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 20:14:50,593 DEBUG [RS:2;jenkins-hbase4:41243] zookeeper.ZKUtil(162): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:50,593 DEBUG [RS:1;jenkins-hbase4:43019] zookeeper.ZKUtil(162): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:50,593 DEBUG [RS:0;jenkins-hbase4:37953] zookeeper.ZKUtil(162): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:50,593 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 20:14:50,593 DEBUG [RS:2;jenkins-hbase4:41243] zookeeper.ZKUtil(162): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:50,594 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67 2023-07-18 20:14:50,594 DEBUG [RS:1;jenkins-hbase4:43019] zookeeper.ZKUtil(162): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:50,594 DEBUG [RS:0;jenkins-hbase4:37953] zookeeper.ZKUtil(162): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:50,594 DEBUG [RS:2;jenkins-hbase4:41243] zookeeper.ZKUtil(162): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:50,595 DEBUG [RS:0;jenkins-hbase4:37953] zookeeper.ZKUtil(162): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:50,595 DEBUG [RS:1;jenkins-hbase4:43019] zookeeper.ZKUtil(162): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:50,619 DEBUG [RS:1;jenkins-hbase4:43019] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 20:14:50,620 DEBUG [RS:0;jenkins-hbase4:37953] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 20:14:50,619 DEBUG [RS:2;jenkins-hbase4:41243] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 20:14:50,625 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:50,628 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 20:14:50,631 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/info 2023-07-18 20:14:50,632 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming 
window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 20:14:50,635 INFO [RS:1;jenkins-hbase4:43019] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 20:14:50,635 INFO [RS:0;jenkins-hbase4:37953] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 20:14:50,636 INFO [RS:2;jenkins-hbase4:41243] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 20:14:50,636 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:50,636 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 20:14:50,640 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/rep_barrier 2023-07-18 20:14:50,640 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 20:14:50,641 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:50,641 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 20:14:50,646 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/table 2023-07-18 20:14:50,647 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 20:14:50,648 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:50,650 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740 2023-07-18 20:14:50,651 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740 2023-07-18 20:14:50,657 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 20:14:50,665 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 20:14:50,667 INFO [RS:2;jenkins-hbase4:41243] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 20:14:50,667 INFO [RS:0;jenkins-hbase4:37953] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 20:14:50,667 INFO [RS:1;jenkins-hbase4:43019] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 20:14:50,676 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:50,677 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11489728160, jitterRate=0.07006432116031647}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 20:14:50,677 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 20:14:50,677 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 20:14:50,677 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 20:14:50,677 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 20:14:50,677 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 20:14:50,677 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 20:14:50,678 INFO [RS:1;jenkins-hbase4:43019] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 20:14:50,678 INFO [RS:2;jenkins-hbase4:41243] 
throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 20:14:50,678 INFO [RS:0;jenkins-hbase4:37953] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 20:14:50,678 INFO [RS:1;jenkins-hbase4:43019] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,679 INFO [RS:0;jenkins-hbase4:37953] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,679 INFO [RS:2;jenkins-hbase4:41243] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,679 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 20:14:50,679 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 20:14:50,686 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 20:14:50,686 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 20:14:50,692 INFO [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 20:14:50,692 INFO [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 20:14:50,692 INFO [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 20:14:50,701 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 20:14:50,704 INFO [RS:0;jenkins-hbase4:37953] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,704 INFO [RS:1;jenkins-hbase4:43019] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,704 INFO [RS:2;jenkins-hbase4:41243] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
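
The globalMemStoreLimit / globalMemStoreLimitLowMark pair reported by MemStoreFlusher above (782.4 M and 743.3 M) is consistent with a low-water mark of 0.95 of the global limit. A minimal sketch of how those bounds are typically derived from configuration, assuming the stock property names hbase.regionserver.global.memstore.size and hbase.regionserver.global.memstore.size.lower.limit; this is illustrative, not code from the test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MemStoreLimitSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of the region server heap reserved for all memstores (default 0.4).
        float globalFraction = conf.getFloat("hbase.regionserver.global.memstore.size", 0.4f);
        // Low-water mark expressed as a fraction of the global limit (default 0.95).
        float lowerFraction = conf.getFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);

        long maxHeap = Runtime.getRuntime().maxMemory();
        long globalLimit = (long) (maxHeap * globalFraction);
        long lowMark = (long) (globalLimit * lowerFraction);
        // With the heap used by this mini cluster: 782.4 M * 0.95 ~= 743.3 M,
        // matching the MemStoreFlusher line in the log above.
        System.out.println("globalMemStoreLimit=" + globalLimit + " lowMark=" + lowMark);
      }
    }
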
2023-07-18 20:14:50,704 DEBUG [RS:0;jenkins-hbase4:37953] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:2;jenkins-hbase4:41243] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:0;jenkins-hbase4:37953] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:2;jenkins-hbase4:41243] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,704 DEBUG [RS:1;jenkins-hbase4:43019] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:2;jenkins-hbase4:41243] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:1;jenkins-hbase4:43019] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:2;jenkins-hbase4:41243] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:0;jenkins-hbase4:37953] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:2;jenkins-hbase4:41243] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:0;jenkins-hbase4:37953] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:2;jenkins-hbase4:41243] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:14:50,705 DEBUG [RS:0;jenkins-hbase4:37953] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:2;jenkins-hbase4:41243] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:0;jenkins-hbase4:37953] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:14:50,706 DEBUG [RS:2;jenkins-hbase4:41243] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,706 DEBUG [RS:0;jenkins-hbase4:37953] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,706 DEBUG [RS:2;jenkins-hbase4:41243] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,706 DEBUG [RS:0;jenkins-hbase4:37953] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,706 DEBUG [RS:2;jenkins-hbase4:41243] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,706 DEBUG [RS:0;jenkins-hbase4:37953] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,705 DEBUG [RS:1;jenkins-hbase4:43019] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,706 DEBUG [RS:0;jenkins-hbase4:37953] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,706 DEBUG [RS:1;jenkins-hbase4:43019] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,706 DEBUG [RS:1;jenkins-hbase4:43019] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,706 DEBUG [RS:1;jenkins-hbase4:43019] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:14:50,707 DEBUG [RS:1;jenkins-hbase4:43019] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,707 DEBUG [RS:1;jenkins-hbase4:43019] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,707 DEBUG [RS:1;jenkins-hbase4:43019] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,707 DEBUG [RS:1;jenkins-hbase4:43019] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:50,712 INFO [RS:0;jenkins-hbase4:37953] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,712 INFO [RS:2;jenkins-hbase4:41243] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,712 INFO [RS:0;jenkins-hbase4:37953] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,712 INFO [RS:2;jenkins-hbase4:41243] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,713 INFO [RS:0;jenkins-hbase4:37953] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,713 INFO [RS:2;jenkins-hbase4:41243] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
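
Each ScheduledChore entry above (CompactionChecker, MemstoreFlusherChore, nonceCleaner, CompactionThroughputTuner, CompactedHFilesCleaner) is a periodic task registered with the region server's ChoreService. A rough illustration of that pattern, assuming the internal org.apache.hadoop.hbase.ScheduledChore / ChoreService classes keep the shape sketched here; they are private APIs, so the exact signatures may vary by version:

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };

        // Runs chore() every 1000 ms, like the CompactionChecker/MemstoreFlusherChore entries above.
        ScheduledChore checker = new ScheduledChore("exampleChecker", stopper, 1000) {
          @Override
          protected void chore() {
            // periodic work goes here
          }
        };

        ChoreService choreService = new ChoreService("example");
        choreService.scheduleChore(checker);
        // ... later: choreService.shutdown();
      }
    }
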
2023-07-18 20:14:50,714 INFO [RS:1;jenkins-hbase4:43019] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,714 INFO [RS:1;jenkins-hbase4:43019] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,714 INFO [RS:1;jenkins-hbase4:43019] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,715 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 20:14:50,724 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 20:14:50,733 INFO [RS:2;jenkins-hbase4:41243] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 20:14:50,733 INFO [RS:0;jenkins-hbase4:37953] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 20:14:50,733 INFO [RS:1;jenkins-hbase4:43019] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 20:14:50,738 INFO [RS:1;jenkins-hbase4:43019] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43019,1689711288774-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,738 INFO [RS:0;jenkins-hbase4:37953] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37953,1689711288586-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:50,747 INFO [RS:2;jenkins-hbase4:41243] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41243,1689711288943-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 20:14:50,767 INFO [RS:0;jenkins-hbase4:37953] regionserver.Replication(203): jenkins-hbase4.apache.org,37953,1689711288586 started 2023-07-18 20:14:50,767 INFO [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37953,1689711288586, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37953, sessionid=0x1017a1298a70001 2023-07-18 20:14:50,767 DEBUG [RS:0;jenkins-hbase4:37953] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 20:14:50,767 DEBUG [RS:0;jenkins-hbase4:37953] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:50,767 DEBUG [RS:0;jenkins-hbase4:37953] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37953,1689711288586' 2023-07-18 20:14:50,768 DEBUG [RS:0;jenkins-hbase4:37953] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 20:14:50,768 DEBUG [RS:0;jenkins-hbase4:37953] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 20:14:50,769 DEBUG [RS:0;jenkins-hbase4:37953] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 20:14:50,769 DEBUG [RS:0;jenkins-hbase4:37953] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 20:14:50,769 DEBUG [RS:0;jenkins-hbase4:37953] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:50,770 DEBUG [RS:0;jenkins-hbase4:37953] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37953,1689711288586' 2023-07-18 20:14:50,770 DEBUG [RS:0;jenkins-hbase4:37953] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 20:14:50,770 DEBUG [RS:0;jenkins-hbase4:37953] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 20:14:50,770 INFO [RS:2;jenkins-hbase4:41243] regionserver.Replication(203): jenkins-hbase4.apache.org,41243,1689711288943 started 2023-07-18 20:14:50,770 INFO [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41243,1689711288943, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41243, sessionid=0x1017a1298a70003 2023-07-18 20:14:50,771 DEBUG [RS:2;jenkins-hbase4:41243] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 20:14:50,771 DEBUG [RS:2;jenkins-hbase4:41243] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:50,771 DEBUG [RS:2;jenkins-hbase4:41243] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41243,1689711288943' 2023-07-18 20:14:50,772 DEBUG [RS:2;jenkins-hbase4:41243] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 20:14:50,772 DEBUG [RS:0;jenkins-hbase4:37953] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 20:14:50,772 INFO [RS:0;jenkins-hbase4:37953] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 20:14:50,772 INFO 
[RS:0;jenkins-hbase4:37953] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 20:14:50,772 DEBUG [RS:2;jenkins-hbase4:41243] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 20:14:50,773 DEBUG [RS:2;jenkins-hbase4:41243] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 20:14:50,773 DEBUG [RS:2;jenkins-hbase4:41243] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 20:14:50,773 DEBUG [RS:2;jenkins-hbase4:41243] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:50,773 DEBUG [RS:2;jenkins-hbase4:41243] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41243,1689711288943' 2023-07-18 20:14:50,773 DEBUG [RS:2;jenkins-hbase4:41243] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 20:14:50,774 DEBUG [RS:2;jenkins-hbase4:41243] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 20:14:50,775 DEBUG [RS:2;jenkins-hbase4:41243] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 20:14:50,775 INFO [RS:2;jenkins-hbase4:41243] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 20:14:50,775 INFO [RS:2;jenkins-hbase4:41243] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 20:14:50,779 INFO [RS:1;jenkins-hbase4:43019] regionserver.Replication(203): jenkins-hbase4.apache.org,43019,1689711288774 started 2023-07-18 20:14:50,779 INFO [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43019,1689711288774, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43019, sessionid=0x1017a1298a70002 2023-07-18 20:14:50,779 DEBUG [RS:1;jenkins-hbase4:43019] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 20:14:50,779 DEBUG [RS:1;jenkins-hbase4:43019] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:50,779 DEBUG [RS:1;jenkins-hbase4:43019] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43019,1689711288774' 2023-07-18 20:14:50,779 DEBUG [RS:1;jenkins-hbase4:43019] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 20:14:50,780 DEBUG [RS:1;jenkins-hbase4:43019] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 20:14:50,780 DEBUG [RS:1;jenkins-hbase4:43019] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 20:14:50,780 DEBUG [RS:1;jenkins-hbase4:43019] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 20:14:50,780 DEBUG [RS:1;jenkins-hbase4:43019] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:50,781 DEBUG [RS:1;jenkins-hbase4:43019] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43019,1689711288774' 2023-07-18 20:14:50,781 DEBUG 
[RS:1;jenkins-hbase4:43019] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 20:14:50,781 DEBUG [RS:1;jenkins-hbase4:43019] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 20:14:50,782 DEBUG [RS:1;jenkins-hbase4:43019] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 20:14:50,782 INFO [RS:1;jenkins-hbase4:43019] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 20:14:50,782 INFO [RS:1;jenkins-hbase4:43019] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 20:14:50,876 DEBUG [jenkins-hbase4:32929] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 20:14:50,890 INFO [RS:1;jenkins-hbase4:43019] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43019%2C1689711288774, suffix=, logDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,43019,1689711288774, archiveDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs, maxLogs=32 2023-07-18 20:14:50,891 INFO [RS:2;jenkins-hbase4:41243] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41243%2C1689711288943, suffix=, logDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,41243,1689711288943, archiveDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs, maxLogs=32 2023-07-18 20:14:50,898 DEBUG [jenkins-hbase4:32929] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:14:50,900 DEBUG [jenkins-hbase4:32929] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:14:50,900 DEBUG [jenkins-hbase4:32929] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:14:50,900 DEBUG [jenkins-hbase4:32929] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:14:50,900 DEBUG [jenkins-hbase4:32929] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:14:50,898 INFO [RS:0;jenkins-hbase4:37953] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37953%2C1689711288586, suffix=, logDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,37953,1689711288586, archiveDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs, maxLogs=32 2023-07-18 20:14:50,922 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43019,1689711288774, state=OPENING 2023-07-18 20:14:50,936 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 20:14:50,939 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK] 2023-07-18 20:14:50,940 DEBUG 
[RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK] 2023-07-18 20:14:50,940 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK] 2023-07-18 20:14:50,946 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:14:50,946 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 20:14:50,950 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:14:50,962 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK] 2023-07-18 20:14:50,965 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK] 2023-07-18 20:14:50,965 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK] 2023-07-18 20:14:50,966 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK] 2023-07-18 20:14:50,967 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK] 2023-07-18 20:14:50,967 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK] 2023-07-18 20:14:50,974 INFO [RS:1;jenkins-hbase4:43019] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,43019,1689711288774/jenkins-hbase4.apache.org%2C43019%2C1689711288774.1689711290901 2023-07-18 20:14:50,982 DEBUG [RS:1;jenkins-hbase4:43019] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK], DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK], 
DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK]] 2023-07-18 20:14:50,987 INFO [RS:2;jenkins-hbase4:41243] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,41243,1689711288943/jenkins-hbase4.apache.org%2C41243%2C1689711288943.1689711290901 2023-07-18 20:14:50,988 DEBUG [RS:2;jenkins-hbase4:41243] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK], DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK], DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK]] 2023-07-18 20:14:50,989 INFO [RS:0;jenkins-hbase4:37953] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,37953,1689711288586/jenkins-hbase4.apache.org%2C37953%2C1689711288586.1689711290903 2023-07-18 20:14:50,996 DEBUG [RS:0;jenkins-hbase4:37953] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK], DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK], DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK]] 2023-07-18 20:14:51,088 WARN [ReadOnlyZKClient-127.0.0.1:52937@0x2636192b] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 20:14:51,154 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32929,1689711286630] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:14:51,165 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47694, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:14:51,171 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43019] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:47694 deadline: 1689711351171, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:51,179 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:51,187 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:14:51,194 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47708, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:14:51,216 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 20:14:51,216 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:14:51,220 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43019%2C1689711288774.meta, suffix=.meta, 
logDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,43019,1689711288774, archiveDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs, maxLogs=32 2023-07-18 20:14:51,249 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK] 2023-07-18 20:14:51,249 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK] 2023-07-18 20:14:51,251 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK] 2023-07-18 20:14:51,259 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,43019,1689711288774/jenkins-hbase4.apache.org%2C43019%2C1689711288774.meta.1689711291222.meta 2023-07-18 20:14:51,259 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK], DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK], DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK]] 2023-07-18 20:14:51,260 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:14:51,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 20:14:51,264 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 20:14:51,266 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
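
The WALFactory lines above show AsyncFSWALProvider being used both for the per-server WALs and for the hbase:meta WAL, with maxLogs=32 and rollsize equal to half of the 256 MB block size. A small configuration sketch of how those choices are usually expressed; the property names are assumed from the stock WALFactory/AbstractFSWAL keys and are not taken from this test's configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "asyncfs" selects AsyncFSWALProvider, as seen in the WALFactory lines above;
        // "filesystem" (FSHLog) and "multiwal" are the other common choices.
        conf.set("hbase.wal.provider", "asyncfs");
        conf.set("hbase.wal.meta_provider", "asyncfs");
        // maxLogs=32 and rollsize = blocksize * 0.5 in the AbstractFSWAL line above
        // line up with these (assumed) property names and defaults.
        conf.setInt("hbase.regionserver.maxlogs", 32);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
      }
    }
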
2023-07-18 20:14:51,271 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 20:14:51,271 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:51,271 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 20:14:51,272 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 20:14:51,274 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 20:14:51,276 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/info 2023-07-18 20:14:51,276 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/info 2023-07-18 20:14:51,277 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 20:14:51,278 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:51,278 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 20:14:51,279 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/rep_barrier 2023-07-18 20:14:51,279 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/rep_barrier 2023-07-18 20:14:51,280 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 20:14:51,280 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:51,281 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 20:14:51,288 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/table 2023-07-18 20:14:51,288 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/table 2023-07-18 20:14:51,289 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 20:14:51,289 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:51,298 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740 2023-07-18 20:14:51,303 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740 2023-07-18 20:14:51,307 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
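
The FlushLargeStoresPolicy line above falls back to memstore-flush-size divided by the number of column families because hbase.hregion.percolumnfamilyflush.size.lower.bound is not set on hbase:meta. With the default 128 MB flush size and the three families shown (info, rep_barrier, table), that gives the 42.7 M figure in the log and the flushSizeLowerBound=44739242 reported when the region is opened. A worked check (constants only; default flush size assumed):

    public class FlushLowerBoundSketch {
      public static void main(String[] args) {
        long memstoreFlushSize = 134_217_728L; // hbase.hregion.memstore.flush.size default, 128 MB
        int families = 3;                      // info, rep_barrier, table
        long lowerBound = memstoreFlushSize / families;
        // 134217728 / 3 = 44739242 bytes ~= 42.7 MiB, matching the log above.
        System.out.println(lowerBound);
      }
    }
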
2023-07-18 20:14:51,309 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 20:14:51,311 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10648576800, jitterRate=-0.008274003863334656}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 20:14:51,311 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 20:14:51,335 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689711291174 2023-07-18 20:14:51,358 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 20:14:51,359 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 20:14:51,360 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43019,1689711288774, state=OPEN 2023-07-18 20:14:51,363 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 20:14:51,363 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 20:14:51,367 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 20:14:51,367 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43019,1689711288774 in 413 msec 2023-07-18 20:14:51,373 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 20:14:51,373 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 669 msec 2023-07-18 20:14:51,379 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0950 sec 2023-07-18 20:14:51,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689711291379, completionTime=-1 2023-07-18 20:14:51,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 20:14:51,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
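
The two "Opened 1588230740" lines (this one and the earlier pass during InitMetaProcedure) report different desiredMaxFileSize values, 10648576800 and 11489728160, because the split policy applies a random jitter to the configured maximum file size. Assuming the default 10 GB hbase.hregion.max.filesize as the base, the logged numbers are simply base * (1 + jitterRate); the initialSize=268435456 is twice the 128 MB flush size used by IncreasingToUpperBoundRegionSplitPolicy. A worked check using the jitterRate values copied from the log:

    public class SplitPolicySketch {
      public static void main(String[] args) {
        long maxFileSize = 10_737_418_240L;   // hbase.hregion.max.filesize default, 10 GB

        // jitterRate values from the two region-open log lines above.
        double jitterFirstOpen = 0.07006432116031647;
        double jitterSecondOpen = -0.008274003863334656;

        System.out.println((long) (maxFileSize * (1 + jitterFirstOpen)));   // ~11489728160
        System.out.println((long) (maxFileSize * (1 + jitterSecondOpen)));  // ~10648576800

        long flushSize = 134_217_728L;        // hbase.hregion.memstore.flush.size default
        System.out.println(2 * flushSize);    // 268435456, the logged initialSize
      }
    }
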
2023-07-18 20:14:51,437 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 20:14:51,438 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689711351438 2023-07-18 20:14:51,438 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689711411438 2023-07-18 20:14:51,438 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 58 msec 2023-07-18 20:14:51,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32929,1689711286630-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:51,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32929,1689711286630-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:51,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32929,1689711286630-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:51,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:32929, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:51,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:51,470 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 20:14:51,484 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 20:14:51,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 20:14:51,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 20:14:51,499 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:14:51,502 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:14:51,518 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:51,520 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36 empty. 2023-07-18 20:14:51,521 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:51,521 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 20:14:51,578 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 20:14:51,580 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => f1d06ae394b6dc19534084668df26a36, NAME => 'hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:51,600 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:51,600 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing f1d06ae394b6dc19534084668df26a36, disabling compactions & flushes 2023-07-18 20:14:51,600 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 
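
The CreateTableProcedure entries above are the server-side path for the 'hbase:namespace' schema printed by HMaster(2148). For comparison, a client-side sketch that builds a descriptor with the same family attributes for a hypothetical user table; the table name and connection setup are illustrative and not taken from this test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("example"))           // hypothetical table name
              .setColumnFamily(ColumnFamilyDescriptorBuilder
                  .newBuilder(Bytes.toBytes("info"))
                  .setBloomFilterType(BloomType.ROW)              // BLOOMFILTER => 'ROW'
                  .setInMemory(true)                              // IN_MEMORY => 'true'
                  .setMaxVersions(10)                             // VERSIONS => '10'
                  .setBlocksize(8192)                             // BLOCKSIZE => '8192'
                  .build())
              .build();
          admin.createTable(desc);
        }
      }
    }
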
2023-07-18 20:14:51,600 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:14:51,600 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. after waiting 0 ms 2023-07-18 20:14:51,600 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:14:51,600 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:14:51,600 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for f1d06ae394b6dc19534084668df26a36: 2023-07-18 20:14:51,607 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:14:51,623 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711291610"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711291610"}]},"ts":"1689711291610"} 2023-07-18 20:14:51,655 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 20:14:51,657 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:14:51,661 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711291657"}]},"ts":"1689711291657"} 2023-07-18 20:14:51,665 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 20:14:51,671 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:14:51,671 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:14:51,671 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:14:51,671 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:14:51,671 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:14:51,674 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f1d06ae394b6dc19534084668df26a36, ASSIGN}] 2023-07-18 20:14:51,678 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f1d06ae394b6dc19534084668df26a36, ASSIGN 2023-07-18 20:14:51,681 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f1d06ae394b6dc19534084668df26a36, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37953,1689711288586; forceNewPlan=false, retain=false 2023-07-18 20:14:51,702 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32929,1689711286630] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:14:51,704 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32929,1689711286630] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 20:14:51,707 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:14:51,710 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:14:51,714 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:51,715 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8 empty. 
2023-07-18 20:14:51,716 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:51,716 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 20:14:51,747 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 20:14:51,749 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 22291b453f6d085322d417bcf0fb99d8, NAME => 'hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:51,784 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:51,784 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 22291b453f6d085322d417bcf0fb99d8, disabling compactions & flushes 2023-07-18 20:14:51,784 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:14:51,784 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:14:51,784 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. after waiting 0 ms 2023-07-18 20:14:51,784 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:14:51,784 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 
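The 'hbase:rsgroup' descriptor logged above differs from the namespace table mainly in its TABLE_ATTRIBUTES: a MultiRowMutationEndpoint coprocessor (the '|...|536870911|' spec matches the default user priority) and a DisabledRegionSplitPolicy so the table is never split, plus a single family 'm' with VERSIONS=1. A minimal sketch of expressing the same attributes with TableDescriptorBuilder follows; the class name and the table name 'demo:rsgroup_like' are hypothetical.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public final class RsGroupLikeTableSketch {
  static TableDescriptor build() throws IOException {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo:rsgroup_like"))          // hypothetical name
        // coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // METADATA SPLIT_POLICY => DisabledRegionSplitPolicy (table is never split)
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("m"))                          // single family 'm', VERSIONS=1
            .setMaxVersions(1)
            .build())
        .build();
  }

  public static void main(String[] args) throws IOException {
    System.out.println(build());   // prints a descriptor comparable to the one logged above
  }
}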
2023-07-18 20:14:51,784 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 22291b453f6d085322d417bcf0fb99d8: 2023-07-18 20:14:51,789 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:14:51,791 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711291791"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711291791"}]},"ts":"1689711291791"} 2023-07-18 20:14:51,794 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 20:14:51,795 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:14:51,795 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711291795"}]},"ts":"1689711291795"} 2023-07-18 20:14:51,801 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 20:14:51,805 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:14:51,805 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:14:51,805 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:14:51,805 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:14:51,805 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:14:51,805 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=22291b453f6d085322d417bcf0fb99d8, ASSIGN}] 2023-07-18 20:14:51,808 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=22291b453f6d085322d417bcf0fb99d8, ASSIGN 2023-07-18 20:14:51,809 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=22291b453f6d085322d417bcf0fb99d8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41243,1689711288943; forceNewPlan=false, retain=false 2023-07-18 20:14:51,810 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-18 20:14:51,812 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f1d06ae394b6dc19534084668df26a36, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:51,812 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=22291b453f6d085322d417bcf0fb99d8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:51,812 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711291812"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711291812"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711291812"}]},"ts":"1689711291812"} 2023-07-18 20:14:51,812 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711291812"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711291812"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711291812"}]},"ts":"1689711291812"} 2023-07-18 20:14:51,815 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure f1d06ae394b6dc19534084668df26a36, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:14:51,817 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 22291b453f6d085322d417bcf0fb99d8, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:51,970 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:51,970 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:14:51,972 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:51,972 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:14:51,974 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37060, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:14:51,976 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56296, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:14:51,983 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 
2023-07-18 20:14:51,984 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f1d06ae394b6dc19534084668df26a36, NAME => 'hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:14:51,986 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:51,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:51,987 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:14:51,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:51,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:51,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 22291b453f6d085322d417bcf0fb99d8, NAME => 'hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:14:51,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 20:14:51,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. service=MultiRowMutationService 2023-07-18 20:14:51,988 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-18 20:14:51,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:51,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:51,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:51,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:51,991 INFO [StoreOpener-f1d06ae394b6dc19534084668df26a36-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:51,994 DEBUG [StoreOpener-f1d06ae394b6dc19534084668df26a36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/info 2023-07-18 20:14:51,994 DEBUG [StoreOpener-f1d06ae394b6dc19534084668df26a36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/info 2023-07-18 20:14:51,994 INFO [StoreOpener-22291b453f6d085322d417bcf0fb99d8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:51,994 INFO [StoreOpener-f1d06ae394b6dc19534084668df26a36-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f1d06ae394b6dc19534084668df26a36 columnFamilyName info 2023-07-18 20:14:51,995 INFO [StoreOpener-f1d06ae394b6dc19534084668df26a36-1] regionserver.HStore(310): Store=f1d06ae394b6dc19534084668df26a36/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:51,997 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36 2023-07-18 
20:14:51,998 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:52,005 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:52,006 DEBUG [StoreOpener-22291b453f6d085322d417bcf0fb99d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/m 2023-07-18 20:14:52,007 DEBUG [StoreOpener-22291b453f6d085322d417bcf0fb99d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/m 2023-07-18 20:14:52,008 INFO [StoreOpener-22291b453f6d085322d417bcf0fb99d8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 22291b453f6d085322d417bcf0fb99d8 columnFamilyName m 2023-07-18 20:14:52,010 INFO [StoreOpener-22291b453f6d085322d417bcf0fb99d8-1] regionserver.HStore(310): Store=22291b453f6d085322d417bcf0fb99d8/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:52,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:52,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:52,013 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f1d06ae394b6dc19534084668df26a36; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12039437600, jitterRate=0.12126000225543976}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:52,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f1d06ae394b6dc19534084668df26a36: 2023-07-18 20:14:52,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8 2023-07-18 
20:14:52,018 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36., pid=8, masterSystemTime=1689711291970 2023-07-18 20:14:52,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:52,023 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:14:52,025 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:14:52,027 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f1d06ae394b6dc19534084668df26a36, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:52,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:52,027 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711292025"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711292025"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711292025"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711292025"}]},"ts":"1689711292025"} 2023-07-18 20:14:52,027 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 22291b453f6d085322d417bcf0fb99d8; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1f266893, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:52,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 22291b453f6d085322d417bcf0fb99d8: 2023-07-18 20:14:52,029 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8., pid=9, masterSystemTime=1689711291972 2023-07-18 20:14:52,033 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:14:52,034 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 
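At this point both system regions report "Opened ... next sequenceid=2" and their post-open deploy tasks have finished. In test code built on HBaseTestingUtility (as this suite is), the usual way to block until that state is reached is sketched below; the helper class and the 'util' parameter are assumptions for illustration, not taken from this log.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public final class WaitForSystemTablesSketch {
  // Blocks until every region of the named tables is assigned and open,
  // assuming 'util' already runs the mini cluster started earlier in this log.
  static void waitForSystemTables(HBaseTestingUtility util) throws Exception {
    util.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:namespace"));
    util.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:rsgroup"));
  }
}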
2023-07-18 20:14:52,035 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=22291b453f6d085322d417bcf0fb99d8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:52,036 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711292035"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711292035"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711292035"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711292035"}]},"ts":"1689711292035"} 2023-07-18 20:14:52,037 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-18 20:14:52,037 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure f1d06ae394b6dc19534084668df26a36, server=jenkins-hbase4.apache.org,37953,1689711288586 in 217 msec 2023-07-18 20:14:52,042 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-18 20:14:52,044 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f1d06ae394b6dc19534084668df26a36, ASSIGN in 363 msec 2023-07-18 20:14:52,046 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:14:52,047 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 20:14:52,047 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 22291b453f6d085322d417bcf0fb99d8, server=jenkins-hbase4.apache.org,41243,1689711288943 in 222 msec 2023-07-18 20:14:52,047 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711292047"}]},"ts":"1689711292047"} 2023-07-18 20:14:52,050 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 20:14:52,051 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-18 20:14:52,051 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=22291b453f6d085322d417bcf0fb99d8, ASSIGN in 242 msec 2023-07-18 20:14:52,052 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:14:52,053 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711292052"}]},"ts":"1689711292052"} 2023-07-18 20:14:52,054 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:14:52,055 INFO [PEWorker-1] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 20:14:52,057 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 567 msec 2023-07-18 20:14:52,059 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:14:52,061 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 357 msec 2023-07-18 20:14:52,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 20:14:52,101 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:14:52,101 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:14:52,130 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:14:52,137 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37064, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:14:52,137 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32929,1689711286630] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:14:52,141 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56300, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:14:52,145 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 20:14:52,145 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-18 20:14:52,159 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 20:14:52,184 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:14:52,190 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 44 msec 2023-07-18 20:14:52,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 20:14:52,208 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:14:52,214 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 19 msec 2023-07-18 20:14:52,220 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 20:14:52,224 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 20:14:52,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.048sec 2023-07-18 20:14:52,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 20:14:52,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 20:14:52,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 20:14:52,229 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:14:52,230 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:52,230 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32929,1689711286630-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 20:14:52,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32929,1689711286630-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
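The CreateNamespaceProcedure entries above (pid=10 for 'default', pid=11 for 'hbase') are the namespace bootstrap the master performs just before logging "Master has completed initialization". From a client, the equivalent operation for a user namespace looks roughly like the sketch below; the namespace name 'demo_ns' and the connection setup are assumptions.

import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public final class CreateNamespaceSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Triggers a CreateNamespaceProcedure on the master, like pid=10 / pid=11 above.
      admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
      for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
        System.out.println(ns.getName());   // expect at least: default, hbase, demo_ns
      }
    }
  }
}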
2023-07-18 20:14:52,233 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 20:14:52,243 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 20:14:52,244 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 20:14:52,246 DEBUG [Listener at localhost/39395] zookeeper.ReadOnlyZKClient(139): Connect 0x4b32111a to 127.0.0.1:52937 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:14:52,251 DEBUG [Listener at localhost/39395] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@227ce6b3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:14:52,269 DEBUG [hconnection-0x422d8bf2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:14:52,285 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47710, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:14:52,297 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,32929,1689711286630 2023-07-18 20:14:52,298 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:14:52,308 DEBUG [Listener at localhost/39395] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 20:14:52,312 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57512, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 20:14:52,327 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 20:14:52,327 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:14:52,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 20:14:52,334 DEBUG [Listener at localhost/39395] zookeeper.ReadOnlyZKClient(139): Connect 0x11ddf8cf to 127.0.0.1:52937 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:14:52,340 DEBUG [Listener at localhost/39395] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a74af9e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:14:52,341 INFO [Listener at localhost/39395] zookeeper.RecoverableZooKeeper(93): Process 
identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:52937 2023-07-18 20:14:52,344 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:14:52,345 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017a1298a7000a connected 2023-07-18 20:14:52,385 INFO [Listener at localhost/39395] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=425, OpenFileDescriptor=681, MaxFileDescriptor=60000, SystemLoadAverage=401, ProcessCount=173, AvailableMemoryMB=3389 2023-07-18 20:14:52,388 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-18 20:14:52,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:52,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:52,491 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 20:14:52,507 INFO [Listener at localhost/39395] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:14:52,507 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:52,507 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:52,507 INFO [Listener at localhost/39395] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:14:52,507 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:14:52,508 INFO [Listener at localhost/39395] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:14:52,508 INFO [Listener at localhost/39395] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:14:52,514 INFO [Listener at localhost/39395] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46139 2023-07-18 20:14:52,515 INFO [Listener at localhost/39395] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 20:14:52,518 DEBUG [Listener at localhost/39395] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 20:14:52,520 INFO [Listener at localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:52,524 INFO [Listener at 
localhost/39395] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:14:52,529 INFO [Listener at localhost/39395] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46139 connecting to ZooKeeper ensemble=127.0.0.1:52937 2023-07-18 20:14:52,534 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:461390x0, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:14:52,534 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(162): regionserver:461390x0, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 20:14:52,539 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(162): regionserver:461390x0, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 20:14:52,541 DEBUG [Listener at localhost/39395] zookeeper.ZKUtil(164): regionserver:461390x0, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:14:52,542 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46139-0x1017a1298a7000b connected 2023-07-18 20:14:52,546 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46139 2023-07-18 20:14:52,546 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46139 2023-07-18 20:14:52,554 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46139 2023-07-18 20:14:52,557 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46139 2023-07-18 20:14:52,562 DEBUG [Listener at localhost/39395] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46139 2023-07-18 20:14:52,565 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:14:52,566 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:14:52,566 INFO [Listener at localhost/39395] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:14:52,566 INFO [Listener at localhost/39395] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 20:14:52,566 INFO [Listener at localhost/39395] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:14:52,566 INFO [Listener at localhost/39395] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:14:52,567 INFO [Listener at localhost/39395] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. 
Disabling /prof endpoint. 2023-07-18 20:14:52,567 INFO [Listener at localhost/39395] http.HttpServer(1146): Jetty bound to port 35449 2023-07-18 20:14:52,567 INFO [Listener at localhost/39395] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:14:52,575 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:52,576 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b0143bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:14:52,576 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:52,577 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6381a2d2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:14:52,736 INFO [Listener at localhost/39395] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:14:52,737 INFO [Listener at localhost/39395] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:14:52,737 INFO [Listener at localhost/39395] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:14:52,737 INFO [Listener at localhost/39395] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 20:14:52,740 INFO [Listener at localhost/39395] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:14:52,741 INFO [Listener at localhost/39395] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@201e20bc{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/java.io.tmpdir/jetty-0_0_0_0-35449-hbase-server-2_4_18-SNAPSHOT_jar-_-any-633647225516789643/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:14:52,743 INFO [Listener at localhost/39395] server.AbstractConnector(333): Started ServerConnector@266cf522{HTTP/1.1, (http/1.1)}{0.0.0.0:35449} 2023-07-18 20:14:52,743 INFO [Listener at localhost/39395] server.Server(415): Started @12033ms 2023-07-18 20:14:52,766 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(951): ClusterId : ff814b3c-b719-44f5-94c5-5c1b8ad72222 2023-07-18 20:14:52,768 DEBUG [RS:3;jenkins-hbase4:46139] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 20:14:52,771 DEBUG [RS:3;jenkins-hbase4:46139] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 20:14:52,771 DEBUG [RS:3;jenkins-hbase4:46139] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 20:14:52,774 DEBUG [RS:3;jenkins-hbase4:46139] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 20:14:52,777 DEBUG [RS:3;jenkins-hbase4:46139] 
zookeeper.ReadOnlyZKClient(139): Connect 0x74ae934f to 127.0.0.1:52937 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:14:52,796 DEBUG [RS:3;jenkins-hbase4:46139] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a6963aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:14:52,796 DEBUG [RS:3;jenkins-hbase4:46139] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e56f83f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:14:52,806 DEBUG [RS:3;jenkins-hbase4:46139] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:46139 2023-07-18 20:14:52,806 INFO [RS:3;jenkins-hbase4:46139] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 20:14:52,806 INFO [RS:3;jenkins-hbase4:46139] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 20:14:52,806 DEBUG [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 20:14:52,807 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,32929,1689711286630 with isa=jenkins-hbase4.apache.org/172.31.14.131:46139, startcode=1689711292506 2023-07-18 20:14:52,808 DEBUG [RS:3;jenkins-hbase4:46139] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 20:14:52,817 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40419, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 20:14:52,818 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32929] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:52,818 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
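What the log shows from "Restoring servers: 1" onward is the test adding a fourth region server (RS:3, RPC port 46139): the new process binds its RPC and info ports, registers with the master, and the RSGroup ServerEventsListenerThread refreshes the default group. In a test this is typically a single call against the mini cluster, sketched below under the assumption that the HBaseTestingUtility instance from this run is available.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public final class StartExtraRegionServerSketch {
  // Starts one additional region server in the running mini cluster; the master then
  // logs 'Registering regionserver=...' as it does above for the server on port 46139.
  static JVMClusterUtil.RegionServerThread startOne(HBaseTestingUtility util) throws Exception {
    return util.getMiniHBaseCluster().startRegionServer();
  }
}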
2023-07-18 20:14:52,819 DEBUG [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67 2023-07-18 20:14:52,819 DEBUG [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37087 2023-07-18 20:14:52,819 DEBUG [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36101 2023-07-18 20:14:52,826 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:14:52,826 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:14:52,826 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:14:52,826 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:14:52,826 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:52,828 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:52,828 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:52,828 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:52,828 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46139,1689711292506] 2023-07-18 20:14:52,828 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 20:14:52,829 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:52,829 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:52,829 DEBUG [RS:3;jenkins-hbase4:46139] zookeeper.ZKUtil(162): regionserver:46139-0x1017a1298a7000b, 
quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:52,829 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:52,829 WARN [RS:3;jenkins-hbase4:46139] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 20:14:52,829 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:52,829 INFO [RS:3;jenkins-hbase4:46139] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:14:52,830 DEBUG [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:52,836 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:52,837 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:52,837 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:52,837 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:52,837 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32929,1689711286630] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 20:14:52,839 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:52,847 DEBUG [RS:3;jenkins-hbase4:46139] zookeeper.ZKUtil(162): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:52,848 DEBUG [RS:3;jenkins-hbase4:46139] zookeeper.ZKUtil(162): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:52,849 DEBUG [RS:3;jenkins-hbase4:46139] zookeeper.ZKUtil(162): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:52,849 DEBUG [RS:3;jenkins-hbase4:46139] zookeeper.ZKUtil(162): regionserver:46139-0x1017a1298a7000b, 
quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:52,850 DEBUG [RS:3;jenkins-hbase4:46139] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 20:14:52,851 INFO [RS:3;jenkins-hbase4:46139] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 20:14:52,854 INFO [RS:3;jenkins-hbase4:46139] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 20:14:52,855 INFO [RS:3;jenkins-hbase4:46139] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 20:14:52,855 INFO [RS:3;jenkins-hbase4:46139] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:52,858 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 20:14:52,862 INFO [RS:3;jenkins-hbase4:46139] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:52,862 DEBUG [RS:3;jenkins-hbase4:46139] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:52,862 DEBUG [RS:3;jenkins-hbase4:46139] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:52,862 DEBUG [RS:3;jenkins-hbase4:46139] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:52,862 DEBUG [RS:3;jenkins-hbase4:46139] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:52,862 DEBUG [RS:3;jenkins-hbase4:46139] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:52,862 DEBUG [RS:3;jenkins-hbase4:46139] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:14:52,862 DEBUG [RS:3;jenkins-hbase4:46139] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:52,862 DEBUG [RS:3;jenkins-hbase4:46139] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:52,862 DEBUG [RS:3;jenkins-hbase4:46139] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:52,862 DEBUG [RS:3;jenkins-hbase4:46139] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:14:52,868 INFO [RS:3;jenkins-hbase4:46139] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
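Just above, RS:3's WALFactory reports AsyncFSWALProvider as the WAL implementation, with its WAL directory under .../WALs/jenkins-hbase4.apache.org,46139,1689711292506. A hedged sketch of the configuration that drives that choice, using standard 2.x property names; the values shown are assumptions consistent with this log, not settings read from it:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" selects AsyncFSWALProvider, the provider named in the log above;
    // "filesystem" would select the classic FSHLog-based provider instead.
    conf.set("hbase.wal.provider", "asyncfs");
    // The later "blocksize=256 MB, rollsize=128 MB" line follows from the WAL block
    // size and this multiplier (rollsize = blocksize * multiplier, default 0.5).
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    System.out.println(conf.get("hbase.wal.provider"));
  }
}
```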
2023-07-18 20:14:52,869 INFO [RS:3;jenkins-hbase4:46139] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:52,869 INFO [RS:3;jenkins-hbase4:46139] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:52,885 INFO [RS:3;jenkins-hbase4:46139] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 20:14:52,885 INFO [RS:3;jenkins-hbase4:46139] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46139,1689711292506-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:14:52,898 INFO [RS:3;jenkins-hbase4:46139] regionserver.Replication(203): jenkins-hbase4.apache.org,46139,1689711292506 started 2023-07-18 20:14:52,898 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46139,1689711292506, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46139, sessionid=0x1017a1298a7000b 2023-07-18 20:14:52,898 DEBUG [RS:3;jenkins-hbase4:46139] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 20:14:52,898 DEBUG [RS:3;jenkins-hbase4:46139] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:52,898 DEBUG [RS:3;jenkins-hbase4:46139] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46139,1689711292506' 2023-07-18 20:14:52,898 DEBUG [RS:3;jenkins-hbase4:46139] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 20:14:52,899 DEBUG [RS:3;jenkins-hbase4:46139] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 20:14:52,900 DEBUG [RS:3;jenkins-hbase4:46139] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 20:14:52,900 DEBUG [RS:3;jenkins-hbase4:46139] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 20:14:52,900 DEBUG [RS:3;jenkins-hbase4:46139] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:52,900 DEBUG [RS:3;jenkins-hbase4:46139] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46139,1689711292506' 2023-07-18 20:14:52,900 DEBUG [RS:3;jenkins-hbase4:46139] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 20:14:52,901 DEBUG [RS:3;jenkins-hbase4:46139] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 20:14:52,901 DEBUG [RS:3;jenkins-hbase4:46139] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 20:14:52,901 INFO [RS:3;jenkins-hbase4:46139] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 20:14:52,901 INFO [RS:3;jenkins-hbase4:46139] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
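RS:3 is now fully online, and the entries that follow are the server side of the test's rsgroup admin RPCs (AddRSGroup, ListRSGroupInfos, MoveServers); the ConstraintException further down is tolerated by the test ("Got this on setup, FYI"), since the master's address is not a registered region server. A minimal sketch of issuing equivalent calls through the branch-2.4 RSGroupAdminClient, with the group name and server addresses copied from the log; the surrounding class and connection handling are illustrative:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // "add rsgroup Group_testTableMoveTruncateAndDrop_1360591523"
      rsGroupAdmin.addRSGroup("Group_testTableMoveTruncateAndDrop_1360591523");

      // "move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243]
      //  to rsgroup Group_testTableMoveTruncateAndDrop_1360591523"
      rsGroupAdmin.moveServers(
          new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 37953),
              Address.fromParts("jenkins-hbase4.apache.org", 41243))),
          "Group_testTableMoveTruncateAndDrop_1360591523");

      // Moving the master's own address is what triggers the ConstraintException
      // below: only live region servers can be moved between groups.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 32929)),
          "master");
    }
  }
}
```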
2023-07-18 20:14:52,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:14:52,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:52,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:52,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:14:52,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:14:52,936 DEBUG [hconnection-0x2e79eb29-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:14:52,944 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47712, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:14:52,960 DEBUG [hconnection-0x2e79eb29-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:14:52,963 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56302, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:14:52,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:52,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:52,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:14:52,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:14:52,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:57512 deadline: 1689712492977, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:14:52,980 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:14:52,982 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:14:52,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:52,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:52,984 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:14:52,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:14:52,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:14:52,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:14:52,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:14:52,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:52,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:52,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:53,000 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:53,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:14:53,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:14:53,006 INFO [RS:3;jenkins-hbase4:46139] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46139%2C1689711292506, suffix=, logDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,46139,1689711292506, archiveDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs, maxLogs=32 2023-07-18 20:14:53,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:53,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:53,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243] to rsgroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:53,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:53,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:53,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:53,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:14:53,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(238): Moving server region f1d06ae394b6dc19534084668df26a36, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:53,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=f1d06ae394b6dc19534084668df26a36, REOPEN/MOVE 2023-07-18 20:14:53,025 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=f1d06ae394b6dc19534084668df26a36, REOPEN/MOVE 2023-07-18 20:14:53,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(238): Moving server region 22291b453f6d085322d417bcf0fb99d8, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:53,030 INFO [PEWorker-5] assignment.RegionStateStore(219): 
pid=12 updating hbase:meta row=f1d06ae394b6dc19534084668df26a36, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:53,030 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711293030"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711293030"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711293030"}]},"ts":"1689711293030"} 2023-07-18 20:14:53,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=22291b453f6d085322d417bcf0fb99d8, REOPEN/MOVE 2023-07-18 20:14:53,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-18 20:14:53,032 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=22291b453f6d085322d417bcf0fb99d8, REOPEN/MOVE 2023-07-18 20:14:53,033 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=22291b453f6d085322d417bcf0fb99d8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:53,033 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711293033"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711293033"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711293033"}]},"ts":"1689711293033"} 2023-07-18 20:14:53,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure f1d06ae394b6dc19534084668df26a36, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:14:53,040 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure 22291b453f6d085322d417bcf0fb99d8, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:53,041 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK] 2023-07-18 20:14:53,042 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK] 2023-07-18 20:14:53,041 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK] 2023-07-18 20:14:53,059 INFO [RS:3;jenkins-hbase4:46139] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,46139,1689711292506/jenkins-hbase4.apache.org%2C46139%2C1689711292506.1689711293007 2023-07-18 20:14:53,062 DEBUG [RS:3;jenkins-hbase4:46139] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK], DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK], DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK]] 2023-07-18 20:14:53,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:53,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f1d06ae394b6dc19534084668df26a36, disabling compactions & flushes 2023-07-18 20:14:53,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:53,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:14:53,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 22291b453f6d085322d417bcf0fb99d8, disabling compactions & flushes 2023-07-18 20:14:53,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:14:53,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:14:53,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. after waiting 0 ms 2023-07-18 20:14:53,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:14:53,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:14:53,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. after waiting 0 ms 2023-07-18 20:14:53,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 
2023-07-18 20:14:53,230 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f1d06ae394b6dc19534084668df26a36 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-18 20:14:53,230 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 22291b453f6d085322d417bcf0fb99d8 1/1 column families, dataSize=1.38 KB heapSize=2.37 KB 2023-07-18 20:14:53,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/.tmp/info/4bdc187eaa864cbe88264c8e65e58f41 2023-07-18 20:14:53,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/.tmp/info/4bdc187eaa864cbe88264c8e65e58f41 as hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/info/4bdc187eaa864cbe88264c8e65e58f41 2023-07-18 20:14:53,476 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/info/4bdc187eaa864cbe88264c8e65e58f41, entries=2, sequenceid=6, filesize=4.8 K 2023-07-18 20:14:53,479 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for f1d06ae394b6dc19534084668df26a36 in 249ms, sequenceid=6, compaction requested=false 2023-07-18 20:14:53,481 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 20:14:53,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-18 20:14:53,496 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 
2023-07-18 20:14:53,496 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f1d06ae394b6dc19534084668df26a36: 2023-07-18 20:14:53,496 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f1d06ae394b6dc19534084668df26a36 move to jenkins-hbase4.apache.org,46139,1689711292506 record at close sequenceid=6 2023-07-18 20:14:53,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:53,500 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=f1d06ae394b6dc19534084668df26a36, regionState=CLOSED 2023-07-18 20:14:53,500 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711293500"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711293500"}]},"ts":"1689711293500"} 2023-07-18 20:14:53,506 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-18 20:14:53,506 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure f1d06ae394b6dc19534084668df26a36, server=jenkins-hbase4.apache.org,37953,1689711288586 in 468 msec 2023-07-18 20:14:53,507 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f1d06ae394b6dc19534084668df26a36, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:14:53,657 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
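The pid=12/pid=14/pid=16 procedures above are the master-side REOPEN/MOVE of hbase:namespace's only region onto jenkins-hbase4.apache.org,46139,1689711292506, driven by the rsgroup server move. For comparison, a sketch of requesting the same kind of region move directly through the client Admin API; the move(byte[], ServerName) overload is assumed to be available on this branch, and the connection setup is illustrative:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;

public class RegionMoveSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Destination server from the log: jenkins-hbase4.apache.org,46139,1689711292506
      ServerName dest = ServerName.valueOf("jenkins-hbase4.apache.org", 46139, 1689711292506L);
      // hbase:namespace has a single region in this log (f1d06ae394b6dc19534084668df26a36).
      for (RegionInfo region : admin.getRegions(TableName.valueOf("hbase:namespace"))) {
        admin.move(region.getEncodedNameAsBytes(), dest);
      }
    }
  }
}
```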
2023-07-18 20:14:53,658 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=f1d06ae394b6dc19534084668df26a36, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:53,658 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711293658"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711293658"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711293658"}]},"ts":"1689711293658"} 2023-07-18 20:14:53,662 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=12, state=RUNNABLE; OpenRegionProcedure f1d06ae394b6dc19534084668df26a36, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:14:53,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/.tmp/m/7759cd7a942249519a7a6f5a0232ed36 2023-07-18 20:14:53,816 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:53,817 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:14:53,827 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37594, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:14:53,834 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 
2023-07-18 20:14:53,834 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f1d06ae394b6dc19534084668df26a36, NAME => 'hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:14:53,834 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/.tmp/m/7759cd7a942249519a7a6f5a0232ed36 as hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/m/7759cd7a942249519a7a6f5a0232ed36 2023-07-18 20:14:53,834 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:53,835 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:53,835 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:53,835 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:53,837 INFO [StoreOpener-f1d06ae394b6dc19534084668df26a36-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:53,840 DEBUG [StoreOpener-f1d06ae394b6dc19534084668df26a36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/info 2023-07-18 20:14:53,840 DEBUG [StoreOpener-f1d06ae394b6dc19534084668df26a36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/info 2023-07-18 20:14:53,840 INFO [StoreOpener-f1d06ae394b6dc19534084668df26a36-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f1d06ae394b6dc19534084668df26a36 columnFamilyName info 2023-07-18 20:14:53,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/m/7759cd7a942249519a7a6f5a0232ed36, entries=3, sequenceid=9, filesize=5.2 K 2023-07-18 20:14:53,858 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1418, heapSize ~2.35 KB/2408, currentSize=0 B/0 for 22291b453f6d085322d417bcf0fb99d8 in 628ms, sequenceid=9, compaction requested=false 2023-07-18 20:14:53,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 20:14:53,865 DEBUG [StoreOpener-f1d06ae394b6dc19534084668df26a36-1] regionserver.HStore(539): loaded hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/info/4bdc187eaa864cbe88264c8e65e58f41 2023-07-18 20:14:53,867 INFO [StoreOpener-f1d06ae394b6dc19534084668df26a36-1] regionserver.HStore(310): Store=f1d06ae394b6dc19534084668df26a36/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:53,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:53,873 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:53,880 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f1d06ae394b6dc19534084668df26a36 2023-07-18 20:14:53,881 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f1d06ae394b6dc19534084668df26a36; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10463471040, jitterRate=-0.02551332116127014}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:53,881 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f1d06ae394b6dc19534084668df26a36: 2023-07-18 20:14:53,892 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36., pid=16, masterSystemTime=1689711293816 2023-07-18 20:14:53,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-18 20:14:53,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 20:14:53,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 
2023-07-18 20:14:53,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 22291b453f6d085322d417bcf0fb99d8: 2023-07-18 20:14:53,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 22291b453f6d085322d417bcf0fb99d8 move to jenkins-hbase4.apache.org,46139,1689711292506 record at close sequenceid=9 2023-07-18 20:14:53,899 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:14:53,900 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:14:53,901 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=f1d06ae394b6dc19534084668df26a36, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:53,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:53,902 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711293901"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711293901"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711293901"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711293901"}]},"ts":"1689711293901"} 2023-07-18 20:14:53,903 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=22291b453f6d085322d417bcf0fb99d8, regionState=CLOSED 2023-07-18 20:14:53,903 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711293903"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711293903"}]},"ts":"1689711293903"} 2023-07-18 20:14:53,920 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-07-18 20:14:53,921 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; OpenRegionProcedure f1d06ae394b6dc19534084668df26a36, server=jenkins-hbase4.apache.org,46139,1689711292506 in 247 msec 2023-07-18 20:14:53,923 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-18 20:14:53,923 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure 22291b453f6d085322d417bcf0fb99d8, server=jenkins-hbase4.apache.org,41243,1689711288943 in 871 msec 2023-07-18 20:14:53,924 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f1d06ae394b6dc19534084668df26a36, REOPEN/MOVE in 899 msec 2023-07-18 20:14:53,924 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=22291b453f6d085322d417bcf0fb99d8, REOPEN/MOVE; state=CLOSED, 
location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:14:54,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-18 20:14:54,075 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 20:14:54,075 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=22291b453f6d085322d417bcf0fb99d8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:54,075 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711294075"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711294075"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711294075"}]},"ts":"1689711294075"} 2023-07-18 20:14:54,078 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=13, state=RUNNABLE; OpenRegionProcedure 22291b453f6d085322d417bcf0fb99d8, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:14:54,237 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:14:54,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 22291b453f6d085322d417bcf0fb99d8, NAME => 'hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:14:54,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 20:14:54,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. service=MultiRowMutationService 2023-07-18 20:14:54,237 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
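The hbase:rsgroup region is reopened with org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint, which is declared in that system table's descriptor (HTD) rather than loaded globally. A brief sketch of declaring a coprocessor on a table descriptor the same way, using the 2.x builder API; the table name here is hypothetical:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class TableCoprocessorSketch {
  public static void main(String[] args) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example_table"))   // hypothetical table name
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        // Same endpoint the log shows being loaded "from HTD of hbase:rsgroup".
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
    System.out.println(desc.hasCoprocessor(
        "org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint"));
  }
}
```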
2023-07-18 20:14:54,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:54,238 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:54,238 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:54,238 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:54,240 INFO [StoreOpener-22291b453f6d085322d417bcf0fb99d8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:54,241 DEBUG [StoreOpener-22291b453f6d085322d417bcf0fb99d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/m 2023-07-18 20:14:54,241 DEBUG [StoreOpener-22291b453f6d085322d417bcf0fb99d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/m 2023-07-18 20:14:54,242 INFO [StoreOpener-22291b453f6d085322d417bcf0fb99d8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 22291b453f6d085322d417bcf0fb99d8 columnFamilyName m 2023-07-18 20:14:54,261 DEBUG [StoreOpener-22291b453f6d085322d417bcf0fb99d8-1] regionserver.HStore(539): loaded hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/m/7759cd7a942249519a7a6f5a0232ed36 2023-07-18 20:14:54,261 INFO [StoreOpener-22291b453f6d085322d417bcf0fb99d8-1] regionserver.HStore(310): Store=22291b453f6d085322d417bcf0fb99d8/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:54,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:54,266 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:54,271 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:14:54,273 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 22291b453f6d085322d417bcf0fb99d8; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@53d2158c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:54,273 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 22291b453f6d085322d417bcf0fb99d8: 2023-07-18 20:14:54,275 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8., pid=17, masterSystemTime=1689711294230 2023-07-18 20:14:54,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:14:54,278 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:14:54,279 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=22291b453f6d085322d417bcf0fb99d8, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:54,279 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711294279"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711294279"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711294279"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711294279"}]},"ts":"1689711294279"} 2023-07-18 20:14:54,285 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=13 2023-07-18 20:14:54,286 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=13, state=SUCCESS; OpenRegionProcedure 22291b453f6d085322d417bcf0fb99d8, server=jenkins-hbase4.apache.org,46139,1689711292506 in 204 msec 2023-07-18 20:14:54,289 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=22291b453f6d085322d417bcf0fb99d8, REOPEN/MOVE in 1.2580 sec 2023-07-18 20:14:55,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure.ProcedureSyncWait(216): waitFor pid=13 2023-07-18 20:14:55,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586, jenkins-hbase4.apache.org,41243,1689711288943] are moved back to default 2023-07-18 20:14:55,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:55,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:14:55,036 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41243] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:56302 deadline: 1689711355036, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46139 startCode=1689711292506. As of locationSeqNum=9. 2023-07-18 20:14:55,142 DEBUG [hconnection-0x2e79eb29-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:14:55,144 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37596, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:14:55,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:55,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:55,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:55,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:14:55,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:14:55,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:55,191 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:14:55,194 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41243] ipc.CallRunner(144): callId: 49 service: ClientService methodName: ExecService size: 622 connection: 172.31.14.131:56300 deadline: 1689711355194, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46139 startCode=1689711292506. As of locationSeqNum=9. 
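The MoveServers, ListRSGroupInfos and GetRSGroupInfo requests above all go through the RSGroupAdminEndpoint on the master. A hedged sketch of the matching client calls, assuming the RSGroupAdminClient that ships with this hbase-rsgroup module (host, port and group name are copied from the log purely for illustration):

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    static void inspectAndMoveServers(Connection conn) throws IOException {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      String group = "Group_testTableMoveTruncateAndDrop_1360591523";
      // RSGroupAdminService.MoveServers: move one server from default into the group.
      groups.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 37953)), group);
      // RSGroupAdminService.GetRSGroupInfo / ListRSGroupInfos.
      RSGroupInfo info = groups.getRSGroupInfo(group);
      System.out.println(info.getServers() + ", groups=" + groups.listRSGroups().size());
    }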
2023-07-18 20:14:55,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 18 2023-07-18 20:14:55,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-18 20:14:55,298 DEBUG [PEWorker-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:14:55,300 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37608, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:14:55,304 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:55,304 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:55,304 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:55,305 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:14:55,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-18 20:14:55,309 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:14:55,315 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:55,315 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:55,315 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:55,315 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:55,315 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba 2023-07-18 20:14:55,316 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b empty. 2023-07-18 20:14:55,316 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663 empty. 
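The create request stored as pid=18 carries the descriptor logged above (REGION_REPLICATION => '1', one family 'f' with VERSIONS => '1', BLOOMFILTER => 'NONE') and, given the five regions created next, split points at 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz'. A client-side sketch that would issue an equivalent request (the Connection argument is illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    static void createPreSplitTable(Connection conn) throws IOException {
      TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // Split keys matching the five regions of pid=18; \xNN escapes are parsed by toBytesBinary.
      byte[][] splits = new byte[][] {
        Bytes.toBytes("aaaaa"),
        Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
        Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
        Bytes.toBytes("zzzzz")
      };
      try (Admin admin = conn.getAdmin()) {
        admin.createTable(
            TableDescriptorBuilder.newBuilder(name)
                .setRegionReplication(1)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                    .setMaxVersions(1)
                    .setBloomFilterType(BloomType.NONE)
                    .build())
                .build(),
            splits);  // the master stores a CreateTableProcedure and the client polls its procId
      }
    }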
2023-07-18 20:14:55,316 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75 empty. 2023-07-18 20:14:55,316 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb empty. 2023-07-18 20:14:55,317 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:55,317 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:55,317 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:55,317 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:55,318 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba empty. 
2023-07-18 20:14:55,318 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba 2023-07-18 20:14:55,319 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 20:14:55,340 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 20:14:55,342 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 70d79902e8f20b1d9a452ee5a0099663, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:55,342 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 05ca6674e12d8b9532230e53f1bec42b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:55,342 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5c6249d9bcb21c629b916739dffc3fcb, NAME => 'Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:55,378 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:55,379 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 
05ca6674e12d8b9532230e53f1bec42b, disabling compactions & flushes 2023-07-18 20:14:55,379 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:55,379 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:55,379 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. after waiting 0 ms 2023-07-18 20:14:55,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:55,380 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:55,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 05ca6674e12d8b9532230e53f1bec42b: 2023-07-18 20:14:55,380 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => b4433e084e7ccfd588e9fef23cea6f75, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:55,381 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:55,382 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 5c6249d9bcb21c629b916739dffc3fcb, disabling compactions & flushes 2023-07-18 20:14:55,382 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:55,382 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:55,382 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 
after waiting 0 ms 2023-07-18 20:14:55,382 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:55,382 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:55,382 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 5c6249d9bcb21c629b916739dffc3fcb: 2023-07-18 20:14:55,383 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8e6841df692dc100742e29693d287aba, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:55,383 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:55,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 70d79902e8f20b1d9a452ee5a0099663, disabling compactions & flushes 2023-07-18 20:14:55,384 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:55,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:55,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. after waiting 0 ms 2023-07-18 20:14:55,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:55,384 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 
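During CREATE_TABLE_WRITE_FS_LAYOUT each new region above is instantiated just long enough to write its directory under .tmp and is then closed again; no data is served yet. A small sketch for reading the resulting descriptor back and spot-checking the 'f' family settings from the create request (variable names are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.util.Bytes;

    static void checkFamilyConfig(Connection conn) throws IOException {
      try (Admin admin = conn.getAdmin()) {
        TableDescriptor td =
            admin.getDescriptor(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
        ColumnFamilyDescriptor f = td.getColumnFamily(Bytes.toBytes("f"));
        // Matches the descriptor in the create request: VERSIONS => '1', BLOCKSIZE => '65536'.
        System.out.println(f.getMaxVersions() + " " + f.getBlocksize());
      }
    }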
2023-07-18 20:14:55,385 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 70d79902e8f20b1d9a452ee5a0099663: 2023-07-18 20:14:55,407 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:55,408 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing b4433e084e7ccfd588e9fef23cea6f75, disabling compactions & flushes 2023-07-18 20:14:55,408 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:55,408 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:55,408 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. after waiting 0 ms 2023-07-18 20:14:55,408 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:55,408 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:55,409 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for b4433e084e7ccfd588e9fef23cea6f75: 2023-07-18 20:14:55,411 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:55,411 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 8e6841df692dc100742e29693d287aba, disabling compactions & flushes 2023-07-18 20:14:55,411 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:55,411 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:55,411 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 
after waiting 0 ms 2023-07-18 20:14:55,411 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:55,411 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:55,411 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 8e6841df692dc100742e29693d287aba: 2023-07-18 20:14:55,414 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:14:55,415 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711295415"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711295415"}]},"ts":"1689711295415"} 2023-07-18 20:14:55,416 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711295415"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711295415"}]},"ts":"1689711295415"} 2023-07-18 20:14:55,416 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711295415"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711295415"}]},"ts":"1689711295415"} 2023-07-18 20:14:55,416 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711295415"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711295415"}]},"ts":"1689711295415"} 2023-07-18 20:14:55,416 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711295415"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711295415"}]},"ts":"1689711295415"} 2023-07-18 20:14:55,462 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
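"Added 5 regions to meta" means hbase:meta now holds one info:regioninfo row per region plus a table state row (set to ENABLING in the next step). A sketch of listing those regions from a client, assuming the 2.x Admin API:

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    static void listRegionsFromMeta(Connection conn) throws IOException {
      try (Admin admin = conn.getAdmin()) {
        List<RegionInfo> regions =
            admin.getRegions(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
        for (RegionInfo ri : regions) {
          // Expect the five start keys from the log: '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'.
          System.out.println(ri.getEncodedName() + " start=" + Bytes.toStringBinary(ri.getStartKey()));
        }
      }
    }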
2023-07-18 20:14:55,463 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:14:55,463 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711295463"}]},"ts":"1689711295463"} 2023-07-18 20:14:55,465 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 20:14:55,474 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:14:55,474 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:14:55,475 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:14:55,475 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:14:55,475 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c6249d9bcb21c629b916739dffc3fcb, ASSIGN}, {pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ca6674e12d8b9532230e53f1bec42b, ASSIGN}, {pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=70d79902e8f20b1d9a452ee5a0099663, ASSIGN}, {pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4433e084e7ccfd588e9fef23cea6f75, ASSIGN}, {pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e6841df692dc100742e29693d287aba, ASSIGN}] 2023-07-18 20:14:55,478 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e6841df692dc100742e29693d287aba, ASSIGN 2023-07-18 20:14:55,478 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4433e084e7ccfd588e9fef23cea6f75, ASSIGN 2023-07-18 20:14:55,479 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=70d79902e8f20b1d9a452ee5a0099663, ASSIGN 2023-07-18 20:14:55,479 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ca6674e12d8b9532230e53f1bec42b, ASSIGN 2023-07-18 20:14:55,480 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c6249d9bcb21c629b916739dffc3fcb, ASSIGN 2023-07-18 20:14:55,480 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e6841df692dc100742e29693d287aba, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43019,1689711288774; forceNewPlan=false, retain=false 2023-07-18 20:14:55,480 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4433e084e7ccfd588e9fef23cea6f75, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:14:55,480 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ca6674e12d8b9532230e53f1bec42b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43019,1689711288774; forceNewPlan=false, retain=false 2023-07-18 20:14:55,480 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=70d79902e8f20b1d9a452ee5a0099663, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43019,1689711288774; forceNewPlan=false, retain=false 2023-07-18 20:14:55,481 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c6249d9bcb21c629b916739dffc3fcb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:14:55,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-18 20:14:55,630 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
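At this point the balancer has produced a plan (three regions targeted at the 43019 server, two at 46139) and each TransitRegionStateProcedure moves its region from OFFLINE toward OPENING on that server. A sketch of reading the resulting locations on the client side once assignment settles (names are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    static void printAssignments(Connection conn) throws IOException {
      try (RegionLocator locator =
          conn.getRegionLocator(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
        for (HRegionLocation loc : locator.getAllRegionLocations()) {
          // ServerName strings look like the log's jenkins-hbase4.apache.org,43019,1689711288774
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }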
2023-07-18 20:14:55,633 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=8e6841df692dc100742e29693d287aba, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:55,633 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=70d79902e8f20b1d9a452ee5a0099663, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:55,633 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=05ca6674e12d8b9532230e53f1bec42b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:55,634 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711295633"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711295633"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711295633"}]},"ts":"1689711295633"} 2023-07-18 20:14:55,633 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=5c6249d9bcb21c629b916739dffc3fcb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:55,633 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=b4433e084e7ccfd588e9fef23cea6f75, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:55,634 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711295633"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711295633"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711295633"}]},"ts":"1689711295633"} 2023-07-18 20:14:55,634 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711295633"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711295633"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711295633"}]},"ts":"1689711295633"} 2023-07-18 20:14:55,634 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711295633"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711295633"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711295633"}]},"ts":"1689711295633"} 2023-07-18 20:14:55,634 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711295633"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711295633"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711295633"}]},"ts":"1689711295633"} 2023-07-18 20:14:55,637 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=23, state=RUNNABLE; OpenRegionProcedure 
8e6841df692dc100742e29693d287aba, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:14:55,639 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=20, state=RUNNABLE; OpenRegionProcedure 05ca6674e12d8b9532230e53f1bec42b, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:14:55,641 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=21, state=RUNNABLE; OpenRegionProcedure 70d79902e8f20b1d9a452ee5a0099663, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:14:55,643 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=19, state=RUNNABLE; OpenRegionProcedure 5c6249d9bcb21c629b916739dffc3fcb, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:14:55,645 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=22, state=RUNNABLE; OpenRegionProcedure b4433e084e7ccfd588e9fef23cea6f75, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:14:55,802 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:55,802 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 70d79902e8f20b1d9a452ee5a0099663, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 20:14:55,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:55,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:55,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:55,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:55,805 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 
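While these OpenRegionProcedures are dispatched to the region servers, the client keeps polling the master ("Checking to see if procedure is done pid=18"); the blocking Admin.createTable call does exactly that under the hood. A sketch of the explicit, future-based variant, assuming the async Admin API (the timeout value is illustrative):

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    static void createAndWait(Connection conn, TableDescriptor desc, byte[][] splits) throws Exception {
      try (Admin admin = conn.getAdmin()) {
        Future<Void> f = admin.createTableAsync(desc, splits); // master returns the procId right away
        f.get(60, TimeUnit.SECONDS);                           // client-side polling, as in the log
        System.out.println(admin.isTableAvailable(desc.getTableName())); // true once all regions are OPEN
      }
    }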
2023-07-18 20:14:55,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c6249d9bcb21c629b916739dffc3fcb, NAME => 'Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 20:14:55,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:55,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:55,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:55,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:55,811 INFO [StoreOpener-70d79902e8f20b1d9a452ee5a0099663-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:55,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-18 20:14:55,814 DEBUG [StoreOpener-70d79902e8f20b1d9a452ee5a0099663-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663/f 2023-07-18 20:14:55,814 DEBUG [StoreOpener-70d79902e8f20b1d9a452ee5a0099663-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663/f 2023-07-18 20:14:55,816 INFO [StoreOpener-70d79902e8f20b1d9a452ee5a0099663-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 70d79902e8f20b1d9a452ee5a0099663 columnFamilyName f 2023-07-18 20:14:55,817 INFO [StoreOpener-70d79902e8f20b1d9a452ee5a0099663-1] regionserver.HStore(310): Store=70d79902e8f20b1d9a452ee5a0099663/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:55,818 INFO [StoreOpener-5c6249d9bcb21c629b916739dffc3fcb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:55,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:55,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:55,822 DEBUG [StoreOpener-5c6249d9bcb21c629b916739dffc3fcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb/f 2023-07-18 20:14:55,822 DEBUG [StoreOpener-5c6249d9bcb21c629b916739dffc3fcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb/f 2023-07-18 20:14:55,823 INFO [StoreOpener-5c6249d9bcb21c629b916739dffc3fcb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c6249d9bcb21c629b916739dffc3fcb columnFamilyName f 2023-07-18 20:14:55,824 INFO [StoreOpener-5c6249d9bcb21c629b916739dffc3fcb-1] regionserver.HStore(310): Store=5c6249d9bcb21c629b916739dffc3fcb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:55,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:55,830 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:55,830 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:55,836 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:55,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:55,838 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 70d79902e8f20b1d9a452ee5a0099663; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11198839040, jitterRate=0.04297316074371338}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:55,838 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 70d79902e8f20b1d9a452ee5a0099663: 2023-07-18 20:14:55,843 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663., pid=26, masterSystemTime=1689711295791 2023-07-18 20:14:55,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:55,848 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5c6249d9bcb21c629b916739dffc3fcb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10052102080, jitterRate=-0.06382504105567932}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:55,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5c6249d9bcb21c629b916739dffc3fcb: 2023-07-18 20:14:55,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:55,849 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:55,849 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 
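Each region opened above reports its split policy with a jittered desiredMaxFileSize; assuming the default hbase.hregion.max.filesize of 10737418240 bytes (10 GiB), the printed figure is simply base + base * jitterRate, e.g. 10737418240 * (1 + 0.04297316) ≈ 11198839040 for 70d79902e8f20b1d9a452ee5a0099663. A small sketch of that arithmetic, plus how a split policy can be pinned per table (the setter names follow the 2.x TableDescriptorBuilder and are an assumption here, not something taken from this log):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // Reproduce the jittered size printed at region open (values taken from the log).
    static long jitteredMaxFileSize(long baseMaxFileSize, double jitterRate) {
      return baseMaxFileSize + (long) (baseMaxFileSize * jitterRate);
    }

    // Pin a split policy on the descriptor, as hbase:rsgroup does with DisabledRegionSplitPolicy.
    static TableDescriptor withExplicitSplitPolicy(TableName name) {
      return TableDescriptorBuilder.newBuilder(name)
          .setRegionSplitPolicyClassName(
              "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
          .setMaxFileSize(10737418240L)   // hbase.hregion.max.filesize default, 10 GiB
          .build();
    }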
2023-07-18 20:14:55,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 05ca6674e12d8b9532230e53f1bec42b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 20:14:55,849 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=70d79902e8f20b1d9a452ee5a0099663, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:55,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:55,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:55,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:55,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:55,850 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711295849"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711295849"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711295849"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711295849"}]},"ts":"1689711295849"} 2023-07-18 20:14:55,855 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb., pid=27, masterSystemTime=1689711295798 2023-07-18 20:14:55,857 INFO [StoreOpener-05ca6674e12d8b9532230e53f1bec42b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:55,858 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=21 2023-07-18 20:14:55,858 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; OpenRegionProcedure 70d79902e8f20b1d9a452ee5a0099663, server=jenkins-hbase4.apache.org,43019,1689711288774 in 211 msec 2023-07-18 20:14:55,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:55,859 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 
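With the regions reaching OPEN and hbase:meta updated, a table-move scenario like the one this test exercises would continue by moving the table itself into the target group; the corresponding client call would be RSGroupAdminClient#moveTables (shown here as an assumption about the rsgroup client API, not something logged above):

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    static void moveTableIntoGroup(Connection conn) throws IOException {
      new RSGroupAdminClient(conn).moveTables(
          Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
          "Group_testTableMoveTruncateAndDrop_1360591523");
      // This triggers REOPEN/MOVE TransitRegionStateProcedures analogous to pid=13 earlier,
      // so the table's regions reopen on servers that belong to the group.
    }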
2023-07-18 20:14:55,859 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:55,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b4433e084e7ccfd588e9fef23cea6f75, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 20:14:55,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:55,860 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=5c6249d9bcb21c629b916739dffc3fcb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:55,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:55,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:55,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:55,860 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711295860"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711295860"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711295860"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711295860"}]},"ts":"1689711295860"} 2023-07-18 20:14:55,861 DEBUG [StoreOpener-05ca6674e12d8b9532230e53f1bec42b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b/f 2023-07-18 20:14:55,861 DEBUG [StoreOpener-05ca6674e12d8b9532230e53f1bec42b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b/f 2023-07-18 20:14:55,862 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=70d79902e8f20b1d9a452ee5a0099663, ASSIGN in 383 msec 2023-07-18 20:14:55,862 INFO [StoreOpener-05ca6674e12d8b9532230e53f1bec42b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 05ca6674e12d8b9532230e53f1bec42b columnFamilyName f 2023-07-18 20:14:55,865 INFO [StoreOpener-b4433e084e7ccfd588e9fef23cea6f75-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:55,866 INFO [StoreOpener-05ca6674e12d8b9532230e53f1bec42b-1] regionserver.HStore(310): Store=05ca6674e12d8b9532230e53f1bec42b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:55,870 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=19 2023-07-18 20:14:55,871 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=19, state=SUCCESS; OpenRegionProcedure 5c6249d9bcb21c629b916739dffc3fcb, server=jenkins-hbase4.apache.org,46139,1689711292506 in 220 msec 2023-07-18 20:14:55,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:55,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:55,872 DEBUG [StoreOpener-b4433e084e7ccfd588e9fef23cea6f75-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75/f 2023-07-18 20:14:55,872 DEBUG [StoreOpener-b4433e084e7ccfd588e9fef23cea6f75-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75/f 2023-07-18 20:14:55,873 INFO [StoreOpener-b4433e084e7ccfd588e9fef23cea6f75-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b4433e084e7ccfd588e9fef23cea6f75 columnFamilyName f 2023-07-18 20:14:55,873 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=5c6249d9bcb21c629b916739dffc3fcb, ASSIGN in 396 msec 2023-07-18 20:14:55,874 INFO [StoreOpener-b4433e084e7ccfd588e9fef23cea6f75-1] regionserver.HStore(310): Store=b4433e084e7ccfd588e9fef23cea6f75/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:55,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:55,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:55,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:55,882 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:55,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:55,892 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 05ca6674e12d8b9532230e53f1bec42b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11161806720, jitterRate=0.03952425718307495}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:55,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 05ca6674e12d8b9532230e53f1bec42b: 2023-07-18 20:14:55,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:55,893 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b., pid=25, masterSystemTime=1689711295791 2023-07-18 20:14:55,893 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b4433e084e7ccfd588e9fef23cea6f75; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11377116640, jitterRate=0.05957655608654022}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:55,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b4433e084e7ccfd588e9fef23cea6f75: 2023-07-18 20:14:55,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75., pid=28, masterSystemTime=1689711295798 2023-07-18 20:14:55,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:55,905 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:55,905 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:55,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8e6841df692dc100742e29693d287aba, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 20:14:55,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:55,906 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=05ca6674e12d8b9532230e53f1bec42b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:55,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:55,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:55,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:55,906 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711295905"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711295905"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711295905"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711295905"}]},"ts":"1689711295905"} 2023-07-18 20:14:55,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:55,906 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 
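
[Editorial illustration, not part of the test output.] The CompactionConfiguration lines above print the effective compaction settings for column family f as each store opens. For orientation, a small sketch of the configuration keys that, to the best of my reading, back those logged values; all numbers are copied from the log, and the key names are the usual hbase.hstore.compaction.* / hbase.hregion.majorcompaction properties, so treat them as assumptions if your branch differs.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionDefaultsSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Setting these explicitly simply reproduces the values CompactionConfiguration logs above.
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period: 7 days
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter
        System.out.println("ratio = " + conf.getFloat("hbase.hstore.compaction.ratio", -1f));
      }
    }
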
2023-07-18 20:14:55,909 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=b4433e084e7ccfd588e9fef23cea6f75, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:55,910 INFO [StoreOpener-8e6841df692dc100742e29693d287aba-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:55,910 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711295908"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711295908"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711295908"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711295908"}]},"ts":"1689711295908"} 2023-07-18 20:14:55,913 DEBUG [StoreOpener-8e6841df692dc100742e29693d287aba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba/f 2023-07-18 20:14:55,913 DEBUG [StoreOpener-8e6841df692dc100742e29693d287aba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba/f 2023-07-18 20:14:55,915 INFO [StoreOpener-8e6841df692dc100742e29693d287aba-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8e6841df692dc100742e29693d287aba columnFamilyName f 2023-07-18 20:14:55,916 INFO [StoreOpener-8e6841df692dc100742e29693d287aba-1] regionserver.HStore(310): Store=8e6841df692dc100742e29693d287aba/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:55,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba 2023-07-18 20:14:55,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba 2023-07-18 20:14:55,925 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, 
resume processing ppid=20 2023-07-18 20:14:55,925 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=20, state=SUCCESS; OpenRegionProcedure 05ca6674e12d8b9532230e53f1bec42b, server=jenkins-hbase4.apache.org,43019,1689711288774 in 275 msec 2023-07-18 20:14:55,926 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=22 2023-07-18 20:14:55,926 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=22, state=SUCCESS; OpenRegionProcedure b4433e084e7ccfd588e9fef23cea6f75, server=jenkins-hbase4.apache.org,46139,1689711292506 in 271 msec 2023-07-18 20:14:55,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:55,930 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4433e084e7ccfd588e9fef23cea6f75, ASSIGN in 451 msec 2023-07-18 20:14:55,931 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ca6674e12d8b9532230e53f1bec42b, ASSIGN in 450 msec 2023-07-18 20:14:55,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:55,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8e6841df692dc100742e29693d287aba; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9807843520, jitterRate=-0.08657339215278625}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:55,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8e6841df692dc100742e29693d287aba: 2023-07-18 20:14:55,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba., pid=24, masterSystemTime=1689711295791 2023-07-18 20:14:55,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:55,943 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 
2023-07-18 20:14:55,943 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=8e6841df692dc100742e29693d287aba, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:55,943 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711295943"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711295943"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711295943"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711295943"}]},"ts":"1689711295943"} 2023-07-18 20:14:55,953 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=23 2023-07-18 20:14:55,953 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=23, state=SUCCESS; OpenRegionProcedure 8e6841df692dc100742e29693d287aba, server=jenkins-hbase4.apache.org,43019,1689711288774 in 309 msec 2023-07-18 20:14:55,957 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=18 2023-07-18 20:14:55,957 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e6841df692dc100742e29693d287aba, ASSIGN in 478 msec 2023-07-18 20:14:55,959 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:14:55,959 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711295959"}]},"ts":"1689711295959"} 2023-07-18 20:14:55,961 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 20:14:55,967 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:14:55,969 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 781 msec 2023-07-18 20:14:56,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-18 20:14:56,317 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 18 completed 2023-07-18 20:14:56,317 DEBUG [Listener at localhost/39395] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-18 20:14:56,320 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:14:56,336 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 
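
[Editorial illustration, not part of the test output.] At this point the CreateTableProcedure for Group_testTableMoveTruncateAndDrop has finished and the client waits until all regions are assigned. A minimal sketch (not the actual TestRSGroupsAdmin1 code) of producing the same five-region layout, using the split keys visible in the region boundaries above and the HBaseTestingUtility wait that appears in the log; the class and method names are placeholders.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTableSketch {
      // Region boundaries above: '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz', ''.
      static void createAndWait(HBaseTestingUtility util) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        byte[][] splitKeys = new byte[][] {
            Bytes.toBytes("aaaaa"),
            new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },  // i\xBF\x14i\xBE
            new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },         // r\x1C\xC7r\x1B
            Bytes.toBytes("zzzzz")
        };
        try (Admin admin = util.getConnection().getAdmin()) {
          // One column family "f", matching the stores opened above.
          admin.createTable(
              TableDescriptorBuilder.newBuilder(tn)
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                  .build(),
              splitKeys);
        }
        // Mirrors the "Waiting until all regions of table ... get assigned" wait in the log.
        util.waitUntilAllRegionsAssigned(tn);
      }
    }
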
2023-07-18 20:14:56,337 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:14:56,337 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-18 20:14:56,338 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:14:56,342 DEBUG [Listener at localhost/39395] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:14:56,345 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37076, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:14:56,349 DEBUG [Listener at localhost/39395] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:14:56,350 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56304, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:14:56,351 DEBUG [Listener at localhost/39395] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:14:56,355 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47726, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:14:56,357 DEBUG [Listener at localhost/39395] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:14:56,360 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37622, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:14:56,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:56,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:14:56,374 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:56,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:56,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:56,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:56,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:56,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:14:56,394 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:56,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region 5c6249d9bcb21c629b916739dffc3fcb to RSGroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:56,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:14:56,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:14:56,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:14:56,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:14:56,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:14:56,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c6249d9bcb21c629b916739dffc3fcb, REOPEN/MOVE 2023-07-18 20:14:56,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region 05ca6674e12d8b9532230e53f1bec42b to RSGroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:56,398 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c6249d9bcb21c629b916739dffc3fcb, REOPEN/MOVE 2023-07-18 20:14:56,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:14:56,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:14:56,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:14:56,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:14:56,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:14:56,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ca6674e12d8b9532230e53f1bec42b, REOPEN/MOVE 2023-07-18 20:14:56,400 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5c6249d9bcb21c629b916739dffc3fcb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:56,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] 
rsgroup.RSGroupAdminServer(345): Moving region 70d79902e8f20b1d9a452ee5a0099663 to RSGroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:56,401 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711296400"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711296400"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711296400"}]},"ts":"1689711296400"} 2023-07-18 20:14:56,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:14:56,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:14:56,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:14:56,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:14:56,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:14:56,404 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=29, state=RUNNABLE; CloseRegionProcedure 5c6249d9bcb21c629b916739dffc3fcb, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:14:56,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=70d79902e8f20b1d9a452ee5a0099663, REOPEN/MOVE 2023-07-18 20:14:56,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region b4433e084e7ccfd588e9fef23cea6f75 to RSGroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:56,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:14:56,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:14:56,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:14:56,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:14:56,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:14:56,408 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ca6674e12d8b9532230e53f1bec42b, REOPEN/MOVE 2023-07-18 20:14:56,409 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=70d79902e8f20b1d9a452ee5a0099663, REOPEN/MOVE 2023-07-18 20:14:56,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4433e084e7ccfd588e9fef23cea6f75, REOPEN/MOVE 2023-07-18 20:14:56,410 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=05ca6674e12d8b9532230e53f1bec42b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:56,411 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4433e084e7ccfd588e9fef23cea6f75, REOPEN/MOVE 2023-07-18 20:14:56,412 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711296410"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711296410"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711296410"}]},"ts":"1689711296410"} 2023-07-18 20:14:56,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region 8e6841df692dc100742e29693d287aba to RSGroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:56,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:14:56,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:14:56,412 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=70d79902e8f20b1d9a452ee5a0099663, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:56,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:14:56,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:14:56,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:14:56,414 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=b4433e084e7ccfd588e9fef23cea6f75, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:14:56,414 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711296414"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711296414"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711296414"}]},"ts":"1689711296414"} 2023-07-18 20:14:56,413 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711296412"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711296412"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711296412"}]},"ts":"1689711296412"} 2023-07-18 20:14:56,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e6841df692dc100742e29693d287aba, REOPEN/MOVE 2023-07-18 20:14:56,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1360591523, current retry=0 2023-07-18 20:14:56,417 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=30, state=RUNNABLE; CloseRegionProcedure 05ca6674e12d8b9532230e53f1bec42b, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:14:56,418 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e6841df692dc100742e29693d287aba, REOPEN/MOVE 2023-07-18 20:14:56,419 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=33, state=RUNNABLE; CloseRegionProcedure b4433e084e7ccfd588e9fef23cea6f75, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:14:56,421 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=31, state=RUNNABLE; CloseRegionProcedure 70d79902e8f20b1d9a452ee5a0099663, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:14:56,422 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=8e6841df692dc100742e29693d287aba, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:14:56,422 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711296421"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711296421"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711296421"}]},"ts":"1689711296421"} 2023-07-18 20:14:56,425 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=34, state=RUNNABLE; CloseRegionProcedure 8e6841df692dc100742e29693d287aba, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:14:56,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:56,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5c6249d9bcb21c629b916739dffc3fcb, disabling compactions & flushes 2023-07-18 20:14:56,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 
2023-07-18 20:14:56,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:56,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. after waiting 0 ms 2023-07-18 20:14:56,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:56,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:56,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8e6841df692dc100742e29693d287aba, disabling compactions & flushes 2023-07-18 20:14:56,590 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:56,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:56,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. after waiting 0 ms 2023-07-18 20:14:56,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:56,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:14:56,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 
2023-07-18 20:14:56,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5c6249d9bcb21c629b916739dffc3fcb: 2023-07-18 20:14:56,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5c6249d9bcb21c629b916739dffc3fcb move to jenkins-hbase4.apache.org,41243,1689711288943 record at close sequenceid=2 2023-07-18 20:14:56,625 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:56,626 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:56,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b4433e084e7ccfd588e9fef23cea6f75, disabling compactions & flushes 2023-07-18 20:14:56,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:56,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:56,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. after waiting 0 ms 2023-07-18 20:14:56,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 
2023-07-18 20:14:56,627 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5c6249d9bcb21c629b916739dffc3fcb, regionState=CLOSED 2023-07-18 20:14:56,627 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711296627"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711296627"}]},"ts":"1689711296627"} 2023-07-18 20:14:56,633 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-18 20:14:56,633 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure 5c6249d9bcb21c629b916739dffc3fcb, server=jenkins-hbase4.apache.org,46139,1689711292506 in 226 msec 2023-07-18 20:14:56,634 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c6249d9bcb21c629b916739dffc3fcb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41243,1689711288943; forceNewPlan=false, retain=false 2023-07-18 20:14:56,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:14:56,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:56,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8e6841df692dc100742e29693d287aba: 2023-07-18 20:14:56,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8e6841df692dc100742e29693d287aba move to jenkins-hbase4.apache.org,37953,1689711288586 record at close sequenceid=2 2023-07-18 20:14:56,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:56,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:56,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 05ca6674e12d8b9532230e53f1bec42b, disabling compactions & flushes 2023-07-18 20:14:56,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:56,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:56,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 
after waiting 0 ms 2023-07-18 20:14:56,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:56,668 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=8e6841df692dc100742e29693d287aba, regionState=CLOSED 2023-07-18 20:14:56,668 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711296668"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711296668"}]},"ts":"1689711296668"} 2023-07-18 20:14:56,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:14:56,671 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:56,671 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b4433e084e7ccfd588e9fef23cea6f75: 2023-07-18 20:14:56,671 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b4433e084e7ccfd588e9fef23cea6f75 move to jenkins-hbase4.apache.org,41243,1689711288943 record at close sequenceid=2 2023-07-18 20:14:56,675 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:56,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:14:56,679 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=34 2023-07-18 20:14:56,679 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=b4433e084e7ccfd588e9fef23cea6f75, regionState=CLOSED 2023-07-18 20:14:56,679 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=34, state=SUCCESS; CloseRegionProcedure 8e6841df692dc100742e29693d287aba, server=jenkins-hbase4.apache.org,43019,1689711288774 in 246 msec 2023-07-18 20:14:56,679 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711296679"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711296679"}]},"ts":"1689711296679"} 2023-07-18 20:14:56,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 
2023-07-18 20:14:56,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 05ca6674e12d8b9532230e53f1bec42b: 2023-07-18 20:14:56,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 05ca6674e12d8b9532230e53f1bec42b move to jenkins-hbase4.apache.org,41243,1689711288943 record at close sequenceid=2 2023-07-18 20:14:56,680 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e6841df692dc100742e29693d287aba, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37953,1689711288586; forceNewPlan=false, retain=false 2023-07-18 20:14:56,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:56,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:56,685 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 70d79902e8f20b1d9a452ee5a0099663, disabling compactions & flushes 2023-07-18 20:14:56,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:56,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:56,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. after waiting 0 ms 2023-07-18 20:14:56,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 
2023-07-18 20:14:56,695 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=33 2023-07-18 20:14:56,695 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=05ca6674e12d8b9532230e53f1bec42b, regionState=CLOSED 2023-07-18 20:14:56,695 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=33, state=SUCCESS; CloseRegionProcedure b4433e084e7ccfd588e9fef23cea6f75, server=jenkins-hbase4.apache.org,46139,1689711292506 in 262 msec 2023-07-18 20:14:56,695 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711296695"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711296695"}]},"ts":"1689711296695"} 2023-07-18 20:14:56,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:14:56,697 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:56,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 70d79902e8f20b1d9a452ee5a0099663: 2023-07-18 20:14:56,697 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 70d79902e8f20b1d9a452ee5a0099663 move to jenkins-hbase4.apache.org,41243,1689711288943 record at close sequenceid=2 2023-07-18 20:14:56,698 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4433e084e7ccfd588e9fef23cea6f75, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41243,1689711288943; forceNewPlan=false, retain=false 2023-07-18 20:14:56,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:56,701 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=30 2023-07-18 20:14:56,702 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=30, state=SUCCESS; CloseRegionProcedure 05ca6674e12d8b9532230e53f1bec42b, server=jenkins-hbase4.apache.org,43019,1689711288774 in 282 msec 2023-07-18 20:14:56,703 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ca6674e12d8b9532230e53f1bec42b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41243,1689711288943; forceNewPlan=false, retain=false 2023-07-18 20:14:56,705 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=70d79902e8f20b1d9a452ee5a0099663, regionState=CLOSED 2023-07-18 20:14:56,705 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711296705"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711296705"}]},"ts":"1689711296705"} 2023-07-18 20:14:56,710 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=31 2023-07-18 20:14:56,710 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=31, state=SUCCESS; CloseRegionProcedure 70d79902e8f20b1d9a452ee5a0099663, server=jenkins-hbase4.apache.org,43019,1689711288774 in 287 msec 2023-07-18 20:14:56,712 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=70d79902e8f20b1d9a452ee5a0099663, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41243,1689711288943; forceNewPlan=false, retain=false 2023-07-18 20:14:56,785 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-18 20:14:56,785 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=70d79902e8f20b1d9a452ee5a0099663, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:56,785 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=8e6841df692dc100742e29693d287aba, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:56,785 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=05ca6674e12d8b9532230e53f1bec42b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:56,785 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5c6249d9bcb21c629b916739dffc3fcb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:56,785 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=b4433e084e7ccfd588e9fef23cea6f75, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:56,786 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711296785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711296785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711296785"}]},"ts":"1689711296785"} 2023-07-18 20:14:56,786 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711296785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711296785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711296785"}]},"ts":"1689711296785"} 2023-07-18 20:14:56,786 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711296785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711296785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711296785"}]},"ts":"1689711296785"} 2023-07-18 20:14:56,786 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711296785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711296785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711296785"}]},"ts":"1689711296785"} 2023-07-18 20:14:56,786 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711296785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711296785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711296785"}]},"ts":"1689711296785"} 2023-07-18 20:14:56,789 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=33, state=RUNNABLE; OpenRegionProcedure b4433e084e7ccfd588e9fef23cea6f75, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:56,790 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=30, state=RUNNABLE; OpenRegionProcedure 05ca6674e12d8b9532230e53f1bec42b, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:56,792 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=34, state=RUNNABLE; OpenRegionProcedure 8e6841df692dc100742e29693d287aba, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:14:56,794 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=31, state=RUNNABLE; OpenRegionProcedure 70d79902e8f20b1d9a452ee5a0099663, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:56,796 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=29, state=RUNNABLE; OpenRegionProcedure 5c6249d9bcb21c629b916739dffc3fcb, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:56,836 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 20:14:56,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 
2023-07-18 20:14:56,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 05ca6674e12d8b9532230e53f1bec42b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 20:14:56,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:56,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:56,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:56,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:56,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:56,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8e6841df692dc100742e29693d287aba, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 20:14:56,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:56,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:56,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:56,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:56,952 INFO [StoreOpener-05ca6674e12d8b9532230e53f1bec42b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:56,954 DEBUG [StoreOpener-05ca6674e12d8b9532230e53f1bec42b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b/f 2023-07-18 20:14:56,954 DEBUG [StoreOpener-05ca6674e12d8b9532230e53f1bec42b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b/f 2023-07-18 20:14:56,954 INFO [StoreOpener-8e6841df692dc100742e29693d287aba-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:56,955 INFO [StoreOpener-05ca6674e12d8b9532230e53f1bec42b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 05ca6674e12d8b9532230e53f1bec42b columnFamilyName f 2023-07-18 20:14:56,956 INFO [StoreOpener-05ca6674e12d8b9532230e53f1bec42b-1] regionserver.HStore(310): Store=05ca6674e12d8b9532230e53f1bec42b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:56,957 DEBUG [StoreOpener-8e6841df692dc100742e29693d287aba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba/f 2023-07-18 20:14:56,957 DEBUG [StoreOpener-8e6841df692dc100742e29693d287aba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba/f 2023-07-18 20:14:56,957 INFO [StoreOpener-8e6841df692dc100742e29693d287aba-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8e6841df692dc100742e29693d287aba columnFamilyName f 2023-07-18 20:14:56,958 INFO [StoreOpener-8e6841df692dc100742e29693d287aba-1] regionserver.HStore(310): Store=8e6841df692dc100742e29693d287aba/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:56,959 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 20:14:56,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba 2023-07-18 20:14:56,960 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:56,960 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 20:14:56,961 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-18 20:14:56,961 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 20:14:56,961 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-18 20:14:56,962 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 20:14:56,962 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-18 20:14:56,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba 2023-07-18 20:14:56,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:56,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:56,969 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8e6841df692dc100742e29693d287aba; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10656242560, jitterRate=-0.007560074329376221}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:56,969 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8e6841df692dc100742e29693d287aba: 2023-07-18 20:14:56,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba., pid=41, masterSystemTime=1689711296945 2023-07-18 20:14:56,972 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:56,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:56,973 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:56,973 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=8e6841df692dc100742e29693d287aba, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:56,974 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 05ca6674e12d8b9532230e53f1bec42b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11934031360, jitterRate=0.11144328117370605}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:56,974 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711296973"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711296973"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711296973"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711296973"}]},"ts":"1689711296973"} 2023-07-18 20:14:56,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 05ca6674e12d8b9532230e53f1bec42b: 2023-07-18 20:14:56,976 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b., pid=40, masterSystemTime=1689711296942 2023-07-18 20:14:56,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:56,979 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:56,979 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 
2023-07-18 20:14:56,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b4433e084e7ccfd588e9fef23cea6f75, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 20:14:56,979 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=05ca6674e12d8b9532230e53f1bec42b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:56,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:56,980 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711296979"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711296979"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711296979"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711296979"}]},"ts":"1689711296979"} 2023-07-18 20:14:56,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:56,980 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=34 2023-07-18 20:14:56,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:56,980 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=34, state=SUCCESS; OpenRegionProcedure 8e6841df692dc100742e29693d287aba, server=jenkins-hbase4.apache.org,37953,1689711288586 in 184 msec 2023-07-18 20:14:56,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:56,991 INFO [StoreOpener-b4433e084e7ccfd588e9fef23cea6f75-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:56,991 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e6841df692dc100742e29693d287aba, REOPEN/MOVE in 567 msec 2023-07-18 20:14:56,995 DEBUG [StoreOpener-b4433e084e7ccfd588e9fef23cea6f75-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75/f 2023-07-18 20:14:56,995 DEBUG [StoreOpener-b4433e084e7ccfd588e9fef23cea6f75-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75/f 2023-07-18 20:14:56,996 INFO [StoreOpener-b4433e084e7ccfd588e9fef23cea6f75-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b4433e084e7ccfd588e9fef23cea6f75 columnFamilyName f 2023-07-18 20:14:56,997 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=30 2023-07-18 20:14:56,997 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=30, state=SUCCESS; OpenRegionProcedure 05ca6674e12d8b9532230e53f1bec42b, server=jenkins-hbase4.apache.org,41243,1689711288943 in 201 msec 2023-07-18 20:14:56,997 INFO [StoreOpener-b4433e084e7ccfd588e9fef23cea6f75-1] regionserver.HStore(310): Store=b4433e084e7ccfd588e9fef23cea6f75/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:56,998 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:56,999 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ca6674e12d8b9532230e53f1bec42b, REOPEN/MOVE in 599 msec 2023-07-18 20:14:57,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:57,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:57,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b4433e084e7ccfd588e9fef23cea6f75; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9485338400, jitterRate=-0.11660902202129364}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:57,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b4433e084e7ccfd588e9fef23cea6f75: 2023-07-18 20:14:57,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75., pid=39, masterSystemTime=1689711296942 2023-07-18 20:14:57,007 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:57,007 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:57,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:57,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c6249d9bcb21c629b916739dffc3fcb, NAME => 'Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 20:14:57,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:57,008 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=b4433e084e7ccfd588e9fef23cea6f75, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:57,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:57,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:57,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:57,008 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711297008"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711297008"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711297008"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711297008"}]},"ts":"1689711297008"} 2023-07-18 20:14:57,010 INFO [StoreOpener-5c6249d9bcb21c629b916739dffc3fcb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:57,011 DEBUG [StoreOpener-5c6249d9bcb21c629b916739dffc3fcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb/f 2023-07-18 20:14:57,011 DEBUG [StoreOpener-5c6249d9bcb21c629b916739dffc3fcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb/f 2023-07-18 20:14:57,012 INFO [StoreOpener-5c6249d9bcb21c629b916739dffc3fcb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c6249d9bcb21c629b916739dffc3fcb columnFamilyName f 2023-07-18 20:14:57,013 INFO [StoreOpener-5c6249d9bcb21c629b916739dffc3fcb-1] regionserver.HStore(310): Store=5c6249d9bcb21c629b916739dffc3fcb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:57,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:57,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:57,015 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=33 2023-07-18 20:14:57,016 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=33, state=SUCCESS; OpenRegionProcedure b4433e084e7ccfd588e9fef23cea6f75, server=jenkins-hbase4.apache.org,41243,1689711288943 in 222 msec 2023-07-18 20:14:57,017 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4433e084e7ccfd588e9fef23cea6f75, REOPEN/MOVE in 610 msec 2023-07-18 20:14:57,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:57,020 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5c6249d9bcb21c629b916739dffc3fcb; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9823353760, jitterRate=-0.08512888848781586}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:57,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5c6249d9bcb21c629b916739dffc3fcb: 2023-07-18 20:14:57,021 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb., pid=43, masterSystemTime=1689711296942 2023-07-18 20:14:57,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:57,023 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:57,023 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:57,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 70d79902e8f20b1d9a452ee5a0099663, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 20:14:57,023 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5c6249d9bcb21c629b916739dffc3fcb, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:57,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:57,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:57,023 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711297023"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711297023"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711297023"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711297023"}]},"ts":"1689711297023"} 2023-07-18 20:14:57,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:57,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:57,025 INFO [StoreOpener-70d79902e8f20b1d9a452ee5a0099663-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:57,027 DEBUG [StoreOpener-70d79902e8f20b1d9a452ee5a0099663-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663/f 2023-07-18 20:14:57,027 DEBUG [StoreOpener-70d79902e8f20b1d9a452ee5a0099663-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663/f 
2023-07-18 20:14:57,028 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=29 2023-07-18 20:14:57,028 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=29, state=SUCCESS; OpenRegionProcedure 5c6249d9bcb21c629b916739dffc3fcb, server=jenkins-hbase4.apache.org,41243,1689711288943 in 229 msec 2023-07-18 20:14:57,028 INFO [StoreOpener-70d79902e8f20b1d9a452ee5a0099663-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 70d79902e8f20b1d9a452ee5a0099663 columnFamilyName f 2023-07-18 20:14:57,029 INFO [StoreOpener-70d79902e8f20b1d9a452ee5a0099663-1] regionserver.HStore(310): Store=70d79902e8f20b1d9a452ee5a0099663/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:57,029 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c6249d9bcb21c629b916739dffc3fcb, REOPEN/MOVE in 633 msec 2023-07-18 20:14:57,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:57,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:57,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:57,036 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 70d79902e8f20b1d9a452ee5a0099663; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9438617760, jitterRate=-0.12096022069454193}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:57,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 70d79902e8f20b1d9a452ee5a0099663: 2023-07-18 20:14:57,037 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663., pid=42, masterSystemTime=1689711296942 2023-07-18 20:14:57,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 
2023-07-18 20:14:57,039 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:57,040 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=70d79902e8f20b1d9a452ee5a0099663, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:57,040 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711297040"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711297040"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711297040"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711297040"}]},"ts":"1689711297040"} 2023-07-18 20:14:57,044 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=31 2023-07-18 20:14:57,044 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=31, state=SUCCESS; OpenRegionProcedure 70d79902e8f20b1d9a452ee5a0099663, server=jenkins-hbase4.apache.org,41243,1689711288943 in 248 msec 2023-07-18 20:14:57,046 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=70d79902e8f20b1d9a452ee5a0099663, REOPEN/MOVE in 643 msec 2023-07-18 20:14:57,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure.ProcedureSyncWait(216): waitFor pid=29 2023-07-18 20:14:57,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1360591523. 
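[annotation, not part of the captured log] The entries above show the master finishing the RSGroupAdminService.MoveTables request: each region of Group_testTableMoveTruncateAndDrop is closed on its old server and reopened on a server of the target group Group_testTableMoveTruncateAndDrop_1360591523 via a REOPEN/MOVE TransitRegionStateProcedure. The client code that issues this call is not in the log; the sketch below is a hedged illustration only, and RSGroupAdminClient / moveTables(...) are assumed from the branch-2.x hbase-rsgroup client API.

// Client-side sketch of the MoveTables call whose server-side handling is logged above.
// Names outside the log (RSGroupAdminClient, moveTables) are assumptions, not taken from this output.
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesToGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<TableName> tables = new HashSet<>();
      tables.add(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
      // The master responds by scheduling one REOPEN/MOVE TransitRegionStateProcedure
      // per region, closing each region on its old server and reopening it on a
      // server of the target rsgroup, as recorded in the log above.
      rsGroupAdmin.moveTables(tables, "Group_testTableMoveTruncateAndDrop_1360591523");
    }
  }
}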
2023-07-18 20:14:57,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:14:57,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:57,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:57,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:57,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:14:57,430 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:14:57,436 INFO [Listener at localhost/39395] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:57,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:57,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:57,455 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711297454"}]},"ts":"1689711297454"} 2023-07-18 20:14:57,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-18 20:14:57,456 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 20:14:57,458 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 20:14:57,462 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c6249d9bcb21c629b916739dffc3fcb, UNASSIGN}, {pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ca6674e12d8b9532230e53f1bec42b, UNASSIGN}, {pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=70d79902e8f20b1d9a452ee5a0099663, UNASSIGN}, {pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4433e084e7ccfd588e9fef23cea6f75, UNASSIGN}, {pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=8e6841df692dc100742e29693d287aba, UNASSIGN}] 2023-07-18 20:14:57,464 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ca6674e12d8b9532230e53f1bec42b, UNASSIGN 2023-07-18 20:14:57,464 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4433e084e7ccfd588e9fef23cea6f75, UNASSIGN 2023-07-18 20:14:57,464 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e6841df692dc100742e29693d287aba, UNASSIGN 2023-07-18 20:14:57,464 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=70d79902e8f20b1d9a452ee5a0099663, UNASSIGN 2023-07-18 20:14:57,465 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c6249d9bcb21c629b916739dffc3fcb, UNASSIGN 2023-07-18 20:14:57,465 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=05ca6674e12d8b9532230e53f1bec42b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:57,466 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=b4433e084e7ccfd588e9fef23cea6f75, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:57,466 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=5c6249d9bcb21c629b916739dffc3fcb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:57,466 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711297465"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711297465"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711297465"}]},"ts":"1689711297465"} 2023-07-18 20:14:57,466 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711297465"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711297465"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711297465"}]},"ts":"1689711297465"} 2023-07-18 20:14:57,466 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=70d79902e8f20b1d9a452ee5a0099663, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:57,466 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711297466"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711297466"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711297466"}]},"ts":"1689711297466"} 2023-07-18 20:14:57,466 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711297466"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711297466"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711297466"}]},"ts":"1689711297466"} 2023-07-18 20:14:57,467 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=8e6841df692dc100742e29693d287aba, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:57,467 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711297466"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711297466"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711297466"}]},"ts":"1689711297466"} 2023-07-18 20:14:57,468 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=46, state=RUNNABLE; CloseRegionProcedure 05ca6674e12d8b9532230e53f1bec42b, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:57,469 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=48, state=RUNNABLE; CloseRegionProcedure b4433e084e7ccfd588e9fef23cea6f75, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:57,470 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=45, state=RUNNABLE; CloseRegionProcedure 5c6249d9bcb21c629b916739dffc3fcb, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:57,472 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=47, state=RUNNABLE; CloseRegionProcedure 70d79902e8f20b1d9a452ee5a0099663, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:57,473 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=49, state=RUNNABLE; CloseRegionProcedure 8e6841df692dc100742e29693d287aba, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:14:57,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-18 20:14:57,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:57,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 05ca6674e12d8b9532230e53f1bec42b, disabling compactions & flushes 2023-07-18 20:14:57,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 
2023-07-18 20:14:57,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:57,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. after waiting 0 ms 2023-07-18 20:14:57,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:57,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:57,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8e6841df692dc100742e29693d287aba, disabling compactions & flushes 2023-07-18 20:14:57,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:57,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:57,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. after waiting 0 ms 2023-07-18 20:14:57,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 2023-07-18 20:14:57,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 20:14:57,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b. 2023-07-18 20:14:57,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 05ca6674e12d8b9532230e53f1bec42b: 2023-07-18 20:14:57,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 20:14:57,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba. 
2023-07-18 20:14:57,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8e6841df692dc100742e29693d287aba: 2023-07-18 20:14:57,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:57,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:57,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 70d79902e8f20b1d9a452ee5a0099663, disabling compactions & flushes 2023-07-18 20:14:57,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:57,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:57,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. after waiting 0 ms 2023-07-18 20:14:57,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:57,640 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=05ca6674e12d8b9532230e53f1bec42b, regionState=CLOSED 2023-07-18 20:14:57,640 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711297639"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711297639"}]},"ts":"1689711297639"} 2023-07-18 20:14:57,642 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8e6841df692dc100742e29693d287aba 2023-07-18 20:14:57,644 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=8e6841df692dc100742e29693d287aba, regionState=CLOSED 2023-07-18 20:14:57,644 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711297644"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711297644"}]},"ts":"1689711297644"} 2023-07-18 20:14:57,650 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=46 2023-07-18 20:14:57,650 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=46, state=SUCCESS; CloseRegionProcedure 05ca6674e12d8b9532230e53f1bec42b, server=jenkins-hbase4.apache.org,41243,1689711288943 in 178 msec 2023-07-18 20:14:57,652 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=49 2023-07-18 20:14:57,652 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; CloseRegionProcedure 8e6841df692dc100742e29693d287aba, 
server=jenkins-hbase4.apache.org,37953,1689711288586 in 175 msec 2023-07-18 20:14:57,653 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ca6674e12d8b9532230e53f1bec42b, UNASSIGN in 190 msec 2023-07-18 20:14:57,654 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e6841df692dc100742e29693d287aba, UNASSIGN in 192 msec 2023-07-18 20:14:57,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 20:14:57,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663. 2023-07-18 20:14:57,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 70d79902e8f20b1d9a452ee5a0099663: 2023-07-18 20:14:57,668 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:57,668 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:57,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5c6249d9bcb21c629b916739dffc3fcb, disabling compactions & flushes 2023-07-18 20:14:57,669 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:57,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:57,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. after waiting 0 ms 2023-07-18 20:14:57,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 
2023-07-18 20:14:57,671 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=70d79902e8f20b1d9a452ee5a0099663, regionState=CLOSED 2023-07-18 20:14:57,671 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711297671"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711297671"}]},"ts":"1689711297671"} 2023-07-18 20:14:57,676 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=47 2023-07-18 20:14:57,677 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=47, state=SUCCESS; CloseRegionProcedure 70d79902e8f20b1d9a452ee5a0099663, server=jenkins-hbase4.apache.org,41243,1689711288943 in 201 msec 2023-07-18 20:14:57,682 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 20:14:57,682 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=70d79902e8f20b1d9a452ee5a0099663, UNASSIGN in 217 msec 2023-07-18 20:14:57,683 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb. 2023-07-18 20:14:57,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5c6249d9bcb21c629b916739dffc3fcb: 2023-07-18 20:14:57,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:57,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:57,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b4433e084e7ccfd588e9fef23cea6f75, disabling compactions & flushes 2023-07-18 20:14:57,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:57,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:57,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. after waiting 0 ms 2023-07-18 20:14:57,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 
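Each "updating hbase:meta row=..., regionState=CLOSED" entry above corresponds to a Put of the info:regioninfo and info:state columns on that region's row in hbase:meta. A hedged sketch of reading that state back with the plain client API follows; the row key is copied from a region name in the log and would normally come from RegionInfo#getRegionName() rather than a string literal.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaStateSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // The row key in hbase:meta is the full region name, as printed in the Put entries above.
          byte[] row = Bytes.toBytes(
              "Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.");
          Result r = meta.get(new Get(row).addFamily(Bytes.toBytes("info")));
          byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
          System.out.println("info:state = " + (state == null ? "<none>" : Bytes.toString(state)));
        }
      }
    }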
2023-07-18 20:14:57,689 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=5c6249d9bcb21c629b916739dffc3fcb, regionState=CLOSED 2023-07-18 20:14:57,689 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711297688"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711297688"}]},"ts":"1689711297688"} 2023-07-18 20:14:57,694 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=45 2023-07-18 20:14:57,694 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=45, state=SUCCESS; CloseRegionProcedure 5c6249d9bcb21c629b916739dffc3fcb, server=jenkins-hbase4.apache.org,41243,1689711288943 in 221 msec 2023-07-18 20:14:57,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 20:14:57,696 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c6249d9bcb21c629b916739dffc3fcb, UNASSIGN in 234 msec 2023-07-18 20:14:57,697 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75. 2023-07-18 20:14:57,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b4433e084e7ccfd588e9fef23cea6f75: 2023-07-18 20:14:57,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:57,700 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=b4433e084e7ccfd588e9fef23cea6f75, regionState=CLOSED 2023-07-18 20:14:57,701 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711297700"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711297700"}]},"ts":"1689711297700"} 2023-07-18 20:14:57,707 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=48 2023-07-18 20:14:57,707 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=48, state=SUCCESS; CloseRegionProcedure b4433e084e7ccfd588e9fef23cea6f75, server=jenkins-hbase4.apache.org,41243,1689711288943 in 233 msec 2023-07-18 20:14:57,709 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=44 2023-07-18 20:14:57,710 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4433e084e7ccfd588e9fef23cea6f75, UNASSIGN in 247 msec 2023-07-18 20:14:57,711 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711297711"}]},"ts":"1689711297711"} 2023-07-18 20:14:57,712 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 20:14:57,716 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 20:14:57,723 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=44, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 274 msec 2023-07-18 20:14:57,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-18 20:14:57,759 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 44 completed 2023-07-18 20:14:57,760 INFO [Listener at localhost/39395] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:57,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:57,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=55, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-18 20:14:57,782 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-18 20:14:57,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-18 20:14:57,799 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:57,799 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:57,799 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:57,800 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:57,800 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba 2023-07-18 20:14:57,806 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb/f, FileablePath, 
hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb/recovered.edits] 2023-07-18 20:14:57,806 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba/recovered.edits] 2023-07-18 20:14:57,806 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663/recovered.edits] 2023-07-18 20:14:57,806 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b/recovered.edits] 2023-07-18 20:14:57,813 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75/recovered.edits] 2023-07-18 20:14:57,829 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663/recovered.edits/7.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663/recovered.edits/7.seqid 2023-07-18 20:14:57,830 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b/recovered.edits/7.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b/recovered.edits/7.seqid 2023-07-18 20:14:57,831 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/70d79902e8f20b1d9a452ee5a0099663 2023-07-18 20:14:57,831 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba/recovered.edits/7.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba/recovered.edits/7.seqid 2023-07-18 20:14:57,832 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ca6674e12d8b9532230e53f1bec42b 2023-07-18 20:14:57,833 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e6841df692dc100742e29693d287aba 2023-07-18 20:14:57,833 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75/recovered.edits/7.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75/recovered.edits/7.seqid 2023-07-18 20:14:57,834 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb/recovered.edits/7.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb/recovered.edits/7.seqid 2023-07-18 20:14:57,835 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4433e084e7ccfd588e9fef23cea6f75 2023-07-18 20:14:57,835 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c6249d9bcb21c629b916739dffc3fcb 2023-07-18 20:14:57,835 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 20:14:57,867 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 20:14:57,871 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-18 20:14:57,872 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
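The HFileArchiver entries and the "Removing ... descriptor" / "from region states" steps above are internal phases of TruncateTableProcedure (preserveSplits=true): the old region directories are archived and the old metadata removed before the table is rebuilt. From the client, the whole sequence is driven by a single Admin call; a sketch, with the connection boilerplate assumed since it does not appear in the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateTableSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (!admin.isTableDisabled(tn)) {
            admin.disableTable(tn);    // truncate requires a disabled table
          }
          // preserveSplits=true keeps the old region boundaries; the procedure archives the old
          // region directories, deletes the old rows from hbase:meta, and recreates the regions.
          admin.truncateTable(tn, true);
        }
      }
    }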
2023-07-18 20:14:57,872 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711297872"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:57,872 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711297872"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:57,873 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711297872"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:57,873 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711297872"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:57,873 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711297872"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:57,881 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 20:14:57,882 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5c6249d9bcb21c629b916739dffc3fcb, NAME => 'Group_testTableMoveTruncateAndDrop,,1689711295183.5c6249d9bcb21c629b916739dffc3fcb.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 05ca6674e12d8b9532230e53f1bec42b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689711295183.05ca6674e12d8b9532230e53f1bec42b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 70d79902e8f20b1d9a452ee5a0099663, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711295183.70d79902e8f20b1d9a452ee5a0099663.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => b4433e084e7ccfd588e9fef23cea6f75, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711295183.b4433e084e7ccfd588e9fef23cea6f75.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 8e6841df692dc100742e29693d287aba, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689711295183.8e6841df692dc100742e29693d287aba.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 20:14:57,882 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
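The "Deleted 5 regions from META" entry lists each old region's encoded name and start/end keys (the \xNN escapes are Bytes.toStringBinary notation). The same boundary information for a table's live regions can be fetched through the client API; a sketch, with the table name reused from the log and everything else assumed:

    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListRegionsSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          List<RegionInfo> regions = admin.getRegions(tn);
          for (RegionInfo ri : regions) {
            // Prints encoded name plus [startKey, endKey) in the same escaped form as the log.
            System.out.printf("%s [%s, %s)%n",
                ri.getEncodedName(),
                Bytes.toStringBinary(ri.getStartKey()),
                Bytes.toStringBinary(ri.getEndKey()));
          }
        }
      }
    }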
2023-07-18 20:14:57,882 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689711297882"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:57,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-18 20:14:57,884 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 20:14:57,893 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:57,893 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:57,893 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:57,893 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:57,893 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:57,894 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc empty. 2023-07-18 20:14:57,894 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c empty. 2023-07-18 20:14:57,895 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f empty. 2023-07-18 20:14:57,895 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100 empty. 2023-07-18 20:14:57,895 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed empty. 
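The ARCHIVING / "Directory ... empty." entries above operate on per-region directories laid out as .tmp/data/<namespace>/<table>/<encoded-region-name>. A hedged sketch of listing such a layout with the plain Hadoop FileSystem API follows; the path is taken from the log and would normally be derived from hbase.rootdir in configuration rather than hard-coded.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListRegionDirsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed layout: <rootdir>/data/<namespace>/<table>/<encoded-region-name>/
        Path tableDir = new Path("hdfs://localhost:37087/user/jenkins/test-data/"
            + "295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop");
        FileSystem fs = tableDir.getFileSystem(conf);
        for (FileStatus st : fs.listStatus(tableDir)) {
          System.out.println((st.isDirectory() ? "[dir]  " : "[file] ") + st.getPath().getName());
        }
      }
    }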
2023-07-18 20:14:57,895 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:57,895 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:57,896 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:57,896 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:57,896 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:57,896 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 20:14:57,931 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 20:14:57,932 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 58898cfd2be014c46c3a391817cb5afc, NAME => 'Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:57,936 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 6ee6b625580cb30148c9f8a3bdab9a5f, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:57,942 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => a1bbe3ec05a1c5a539f16f0d42fd6e9c, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:58,012 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:58,012 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 58898cfd2be014c46c3a391817cb5afc, disabling compactions & flushes 2023-07-18 20:14:58,012 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. 2023-07-18 20:14:58,012 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. 2023-07-18 20:14:58,012 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. after waiting 0 ms 2023-07-18 20:14:58,012 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. 2023-07-18 20:14:58,012 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. 
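The region-creation entries above print the table descriptor the recreated regions are built from: a single family 'f' with VERSIONS=1, BLOOMFILTER=NONE, no compression or block encoding, 64 KB blocks, and the block cache enabled. A sketch of building an equivalent descriptor with the 2.x builder API; the attribute values are copied from the log, the rest is an illustration rather than the test's actual code.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
      public static TableDescriptor build() {
        ColumnFamilyDescriptor f = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)                                // VERSIONS => '1'
            .setBloomFilterType(BloomType.NONE)               // BLOOMFILTER => 'NONE'
            .setCompressionType(Compression.Algorithm.NONE)   // COMPRESSION => 'NONE'
            .setDataBlockEncoding(DataBlockEncoding.NONE)     // DATA_BLOCK_ENCODING => 'NONE'
            .setBlocksize(65536)                              // BLOCKSIZE => '65536'
            .setBlockCacheEnabled(true)                       // BLOCKCACHE => 'true'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setColumnFamily(f)
            .build();
      }
    }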
2023-07-18 20:14:58,012 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 58898cfd2be014c46c3a391817cb5afc: 2023-07-18 20:14:58,013 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 53a5be3868ad51ba04e7ea6d85f4b100, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:58,043 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:58,043 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 6ee6b625580cb30148c9f8a3bdab9a5f, disabling compactions & flushes 2023-07-18 20:14:58,043 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. 2023-07-18 20:14:58,043 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. 2023-07-18 20:14:58,043 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. after waiting 0 ms 2023-07-18 20:14:58,043 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. 2023-07-18 20:14:58,043 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. 
2023-07-18 20:14:58,043 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 6ee6b625580cb30148c9f8a3bdab9a5f: 2023-07-18 20:14:58,044 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 849e823477126f0c7b1ab3d81aa7d5ed, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:14:58,061 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:58,061 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing a1bbe3ec05a1c5a539f16f0d42fd6e9c, disabling compactions & flushes 2023-07-18 20:14:58,061 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. 2023-07-18 20:14:58,061 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. 2023-07-18 20:14:58,061 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. after waiting 0 ms 2023-07-18 20:14:58,061 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. 2023-07-18 20:14:58,061 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. 
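Because the truncate preserved splits, the recreated table again has regions bounded at 'aaaaa', the two binary midpoints shown as i\xBF\x14i\xBE and r\x1C\xC7r\x1B, and 'zzzzz'. Creating a table pre-split at the same boundaries would look roughly like the sketch below; the split keys are copied from the log (Bytes.toBytesBinary understands the \xNN escapes), everything else is assumed.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PreSplitCreateSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        byte[][] splitKeys = new byte[][] {
            Bytes.toBytesBinary("aaaaa"),
            Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
            Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
            Bytes.toBytesBinary("zzzzz")
        };
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(
              TableDescriptorBuilder.newBuilder(tn)
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                  .build(),
              splitKeys);   // 4 split keys => 5 regions, matching the boundaries in the log
        }
      }
    }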
2023-07-18 20:14:58,061 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for a1bbe3ec05a1c5a539f16f0d42fd6e9c: 2023-07-18 20:14:58,078 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:58,078 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 53a5be3868ad51ba04e7ea6d85f4b100, disabling compactions & flushes 2023-07-18 20:14:58,078 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. 2023-07-18 20:14:58,078 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. 2023-07-18 20:14:58,078 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. after waiting 0 ms 2023-07-18 20:14:58,078 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. 2023-07-18 20:14:58,078 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. 2023-07-18 20:14:58,078 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 53a5be3868ad51ba04e7ea6d85f4b100: 2023-07-18 20:14:58,084 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:58,084 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 849e823477126f0c7b1ab3d81aa7d5ed, disabling compactions & flushes 2023-07-18 20:14:58,084 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 2023-07-18 20:14:58,084 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 2023-07-18 20:14:58,084 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 
after waiting 0 ms 2023-07-18 20:14:58,084 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 2023-07-18 20:14:58,084 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 2023-07-18 20:14:58,084 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 849e823477126f0c7b1ab3d81aa7d5ed: 2023-07-18 20:14:58,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-18 20:14:58,089 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711298089"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711298089"}]},"ts":"1689711298089"} 2023-07-18 20:14:58,089 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298089"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711298089"}]},"ts":"1689711298089"} 2023-07-18 20:14:58,089 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298089"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711298089"}]},"ts":"1689711298089"} 2023-07-18 20:14:58,089 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298089"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711298089"}]},"ts":"1689711298089"} 2023-07-18 20:14:58,089 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711298089"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711298089"}]},"ts":"1689711298089"} 2023-07-18 20:14:58,093 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
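After the new regions are written to meta the table moves to ENABLING and the ASSIGN subprocedures are queued; the repeated "Checking to see if procedure is done pid=55" entries are the client polling that same procedure. A caller can either block on the synchronous truncateTable call or poll availability itself; a minimal polling sketch, with the 60-second budget an arbitrary assumption:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class WaitAvailableSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          long deadline = System.currentTimeMillis() + 60_000L;   // assumed 60s budget
          while (!admin.isTableAvailable(tn)) {                   // true once every region is assigned
            if (System.currentTimeMillis() > deadline) {
              throw new IllegalStateException("table did not become available in time");
            }
            Thread.sleep(200);
          }
        }
      }
    }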
2023-07-18 20:14:58,094 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711298094"}]},"ts":"1689711298094"} 2023-07-18 20:14:58,096 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 20:14:58,101 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:14:58,101 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:14:58,101 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:14:58,101 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:14:58,101 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58898cfd2be014c46c3a391817cb5afc, ASSIGN}, {pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ee6b625580cb30148c9f8a3bdab9a5f, ASSIGN}, {pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a1bbe3ec05a1c5a539f16f0d42fd6e9c, ASSIGN}, {pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53a5be3868ad51ba04e7ea6d85f4b100, ASSIGN}, {pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=849e823477126f0c7b1ab3d81aa7d5ed, ASSIGN}] 2023-07-18 20:14:58,104 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53a5be3868ad51ba04e7ea6d85f4b100, ASSIGN 2023-07-18 20:14:58,104 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a1bbe3ec05a1c5a539f16f0d42fd6e9c, ASSIGN 2023-07-18 20:14:58,104 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=849e823477126f0c7b1ab3d81aa7d5ed, ASSIGN 2023-07-18 20:14:58,104 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ee6b625580cb30148c9f8a3bdab9a5f, ASSIGN 2023-07-18 20:14:58,104 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58898cfd2be014c46c3a391817cb5afc, ASSIGN 2023-07-18 20:14:58,105 INFO [PEWorker-1] 
assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53a5be3868ad51ba04e7ea6d85f4b100, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41243,1689711288943; forceNewPlan=false, retain=false 2023-07-18 20:14:58,105 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a1bbe3ec05a1c5a539f16f0d42fd6e9c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37953,1689711288586; forceNewPlan=false, retain=false 2023-07-18 20:14:58,105 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ee6b625580cb30148c9f8a3bdab9a5f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41243,1689711288943; forceNewPlan=false, retain=false 2023-07-18 20:14:58,106 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58898cfd2be014c46c3a391817cb5afc, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37953,1689711288586; forceNewPlan=false, retain=false 2023-07-18 20:14:58,105 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=849e823477126f0c7b1ab3d81aa7d5ed, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37953,1689711288586; forceNewPlan=false, retain=false 2023-07-18 20:14:58,255 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
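The balancer entries above pick a server for each new region (retain=false, so placement is not pinned to the previous host) and the ASSIGN procedures then record an OPENING location per region. Once the regions are open, the placements can be read back with RegionLocator; a sketch under the same assumed connection setup as the earlier examples:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationsSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(tn)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }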
2023-07-18 20:14:58,259 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=849e823477126f0c7b1ab3d81aa7d5ed, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:58,259 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=6ee6b625580cb30148c9f8a3bdab9a5f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:58,259 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=a1bbe3ec05a1c5a539f16f0d42fd6e9c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:58,259 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711298259"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711298259"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711298259"}]},"ts":"1689711298259"} 2023-07-18 20:14:58,259 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298259"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711298259"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711298259"}]},"ts":"1689711298259"} 2023-07-18 20:14:58,259 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=53a5be3868ad51ba04e7ea6d85f4b100, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:58,259 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298259"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711298259"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711298259"}]},"ts":"1689711298259"} 2023-07-18 20:14:58,259 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=58898cfd2be014c46c3a391817cb5afc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:58,260 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711298259"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711298259"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711298259"}]},"ts":"1689711298259"} 2023-07-18 20:14:58,259 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298259"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711298259"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711298259"}]},"ts":"1689711298259"} 2023-07-18 20:14:58,262 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE; OpenRegionProcedure 
849e823477126f0c7b1ab3d81aa7d5ed, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:14:58,264 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=58, state=RUNNABLE; OpenRegionProcedure a1bbe3ec05a1c5a539f16f0d42fd6e9c, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:14:58,266 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=57, state=RUNNABLE; OpenRegionProcedure 6ee6b625580cb30148c9f8a3bdab9a5f, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:58,268 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=56, state=RUNNABLE; OpenRegionProcedure 58898cfd2be014c46c3a391817cb5afc, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:14:58,272 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=59, state=RUNNABLE; OpenRegionProcedure 53a5be3868ad51ba04e7ea6d85f4b100, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:58,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-18 20:14:58,420 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. 2023-07-18 20:14:58,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a1bbe3ec05a1c5a539f16f0d42fd6e9c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 20:14:58,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:58,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:58,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:58,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:58,423 INFO [StoreOpener-a1bbe3ec05a1c5a539f16f0d42fd6e9c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:58,424 DEBUG [StoreOpener-a1bbe3ec05a1c5a539f16f0d42fd6e9c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c/f 2023-07-18 20:14:58,425 DEBUG [StoreOpener-a1bbe3ec05a1c5a539f16f0d42fd6e9c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c/f 2023-07-18 20:14:58,425 INFO [StoreOpener-a1bbe3ec05a1c5a539f16f0d42fd6e9c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a1bbe3ec05a1c5a539f16f0d42fd6e9c columnFamilyName f 2023-07-18 20:14:58,425 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. 2023-07-18 20:14:58,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 53a5be3868ad51ba04e7ea6d85f4b100, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 20:14:58,426 INFO [StoreOpener-a1bbe3ec05a1c5a539f16f0d42fd6e9c-1] regionserver.HStore(310): Store=a1bbe3ec05a1c5a539f16f0d42fd6e9c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:58,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:58,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:58,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:58,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:58,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:58,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:58,427 INFO [StoreOpener-53a5be3868ad51ba04e7ea6d85f4b100-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:58,429 DEBUG [StoreOpener-53a5be3868ad51ba04e7ea6d85f4b100-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100/f 2023-07-18 20:14:58,429 DEBUG [StoreOpener-53a5be3868ad51ba04e7ea6d85f4b100-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100/f 2023-07-18 20:14:58,430 INFO [StoreOpener-53a5be3868ad51ba04e7ea6d85f4b100-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 53a5be3868ad51ba04e7ea6d85f4b100 columnFamilyName f 2023-07-18 20:14:58,430 INFO [StoreOpener-53a5be3868ad51ba04e7ea6d85f4b100-1] regionserver.HStore(310): Store=53a5be3868ad51ba04e7ea6d85f4b100/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:58,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:58,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:58,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:58,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:58,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a1bbe3ec05a1c5a539f16f0d42fd6e9c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11795482080, jitterRate=0.09853987395763397}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:58,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a1bbe3ec05a1c5a539f16f0d42fd6e9c: 2023-07-18 20:14:58,435 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c., pid=62, masterSystemTime=1689711298415 2023-07-18 20:14:58,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:58,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. 2023-07-18 20:14:58,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. 2023-07-18 20:14:58,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. 2023-07-18 20:14:58,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 58898cfd2be014c46c3a391817cb5afc, NAME => 'Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 20:14:58,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:58,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:58,438 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=a1bbe3ec05a1c5a539f16f0d42fd6e9c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:58,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:58,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:58,439 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298438"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711298438"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711298438"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711298438"}]},"ts":"1689711298438"} 2023-07-18 20:14:58,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:58,440 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 53a5be3868ad51ba04e7ea6d85f4b100; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11787734240, jitterRate=0.0978183001279831}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:58,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 53a5be3868ad51ba04e7ea6d85f4b100: 2023-07-18 20:14:58,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100., pid=65, masterSystemTime=1689711298422 2023-07-18 20:14:58,441 INFO [StoreOpener-58898cfd2be014c46c3a391817cb5afc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:58,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. 2023-07-18 20:14:58,443 DEBUG [StoreOpener-58898cfd2be014c46c3a391817cb5afc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc/f 2023-07-18 20:14:58,443 DEBUG [StoreOpener-58898cfd2be014c46c3a391817cb5afc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc/f 2023-07-18 20:14:58,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. 2023-07-18 20:14:58,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. 
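
For reference, the CompactionConfiguration(173) entries above are the effective store-compaction settings for column family f. The numbers match the stock HBase defaults, so the test does not appear to override them; a minimal sketch of the standard configuration keys those values come from (the fallback arguments simply mirror what is printed above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionSettings {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Keys behind the values logged above; fallbacks mirror those values.
            long minCompactSize    = conf.getLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // 128 MB
            int  minFilesToCompact = conf.getInt("hbase.hstore.compaction.min", 3);
            int  maxFilesToCompact = conf.getInt("hbase.hstore.compaction.max", 10);
            float ratio            = conf.getFloat("hbase.hstore.compaction.ratio", 1.2f);
            float offPeakRatio     = conf.getFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
            long majorPeriodMs     = conf.getLong("hbase.hregion.majorcompaction", 604_800_000L); // 7 days
            float majorJitter      = conf.getFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
            System.out.printf("minCompactSize=%d files=[%d,%d) ratio=%.1f offPeak=%.1f major=%dms jitter=%.1f%n",
                minCompactSize, minFilesToCompact, maxFilesToCompact, ratio, offPeakRatio, majorPeriodMs, majorJitter);
        }
    }
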
2023-07-18 20:14:58,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6ee6b625580cb30148c9f8a3bdab9a5f, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 20:14:58,444 INFO [StoreOpener-58898cfd2be014c46c3a391817cb5afc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 58898cfd2be014c46c3a391817cb5afc columnFamilyName f 2023-07-18 20:14:58,444 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=53a5be3868ad51ba04e7ea6d85f4b100, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:58,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:58,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:58,444 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298444"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711298444"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711298444"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711298444"}]},"ts":"1689711298444"} 2023-07-18 20:14:58,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:58,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:58,445 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=58 2023-07-18 20:14:58,445 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=58, state=SUCCESS; OpenRegionProcedure a1bbe3ec05a1c5a539f16f0d42fd6e9c, server=jenkins-hbase4.apache.org,37953,1689711288586 in 179 msec 2023-07-18 20:14:58,446 INFO [StoreOpener-58898cfd2be014c46c3a391817cb5afc-1] regionserver.HStore(310): Store=58898cfd2be014c46c3a391817cb5afc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:58,447 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=58, 
ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a1bbe3ec05a1c5a539f16f0d42fd6e9c, ASSIGN in 344 msec 2023-07-18 20:14:58,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:58,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:58,455 INFO [StoreOpener-6ee6b625580cb30148c9f8a3bdab9a5f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:58,457 DEBUG [StoreOpener-6ee6b625580cb30148c9f8a3bdab9a5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f/f 2023-07-18 20:14:58,457 DEBUG [StoreOpener-6ee6b625580cb30148c9f8a3bdab9a5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f/f 2023-07-18 20:14:58,458 INFO [StoreOpener-6ee6b625580cb30148c9f8a3bdab9a5f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6ee6b625580cb30148c9f8a3bdab9a5f columnFamilyName f 2023-07-18 20:14:58,462 INFO [StoreOpener-6ee6b625580cb30148c9f8a3bdab9a5f-1] regionserver.HStore(310): Store=6ee6b625580cb30148c9f8a3bdab9a5f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:58,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:58,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:58,464 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=59 2023-07-18 20:14:58,464 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=59, state=SUCCESS; OpenRegionProcedure 
53a5be3868ad51ba04e7ea6d85f4b100, server=jenkins-hbase4.apache.org,41243,1689711288943 in 177 msec 2023-07-18 20:14:58,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:58,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:58,468 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 58898cfd2be014c46c3a391817cb5afc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11487318880, jitterRate=0.06983993947505951}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:58,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 58898cfd2be014c46c3a391817cb5afc: 2023-07-18 20:14:58,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:58,469 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53a5be3868ad51ba04e7ea6d85f4b100, ASSIGN in 363 msec 2023-07-18 20:14:58,469 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc., pid=64, masterSystemTime=1689711298415 2023-07-18 20:14:58,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. 2023-07-18 20:14:58,471 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. 2023-07-18 20:14:58,471 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 
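
Each "Opened <region>" entry above also prints the split-policy chain for the new region. The desiredMaxFileSize values differ per region (11795482080, 11787734240, 11487318880, ...) because ConstantSizeRegionSplitPolicy adds a per-region random jitter to the 10 GB default of hbase.hregion.max.filesize (10737418240 bytes); initialSize=268435456 matches twice the 128 MB memstore flush size used by IncreasingToUpperBoundRegionSplitPolicy. A rough sketch of the jitter arithmetic (an approximation, not the exact HBase implementation):

    import java.util.concurrent.ThreadLocalRandom;

    public class SplitSizeJitter {
        public static void main(String[] args) {
            long maxFileSize = 10_737_418_240L;  // hbase.hregion.max.filesize default (10 GB)
            float jitter = 0.25f;                // hbase.hregion.max.filesize.jitter default
            // jitterRate is drawn once per region, roughly uniform in [-jitter/2, +jitter/2]
            float jitterRate = (ThreadLocalRandom.current().nextFloat() - 0.5f) * jitter;
            long desiredMaxFileSize = maxFileSize + (long) (maxFileSize * jitterRate);
            System.out.printf("jitterRate=%f desiredMaxFileSize=%d%n", jitterRate, desiredMaxFileSize);
            // e.g. a jitterRate of ~0.0985 gives 10737418240 + ~1058063840 ~= 11795482080, as logged above
        }
    }
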
2023-07-18 20:14:58,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 849e823477126f0c7b1ab3d81aa7d5ed, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 20:14:58,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:58,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:14:58,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:58,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:58,473 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=58898cfd2be014c46c3a391817cb5afc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:58,473 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711298473"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711298473"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711298473"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711298473"}]},"ts":"1689711298473"} 2023-07-18 20:14:58,477 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=56 2023-07-18 20:14:58,477 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=56, state=SUCCESS; OpenRegionProcedure 58898cfd2be014c46c3a391817cb5afc, server=jenkins-hbase4.apache.org,37953,1689711288586 in 207 msec 2023-07-18 20:14:58,479 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58898cfd2be014c46c3a391817cb5afc, ASSIGN in 376 msec 2023-07-18 20:14:58,497 INFO [StoreOpener-849e823477126f0c7b1ab3d81aa7d5ed-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:58,498 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:58,499 DEBUG [StoreOpener-849e823477126f0c7b1ab3d81aa7d5ed-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed/f 2023-07-18 20:14:58,499 DEBUG [StoreOpener-849e823477126f0c7b1ab3d81aa7d5ed-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed/f 2023-07-18 20:14:58,499 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6ee6b625580cb30148c9f8a3bdab9a5f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9731838560, jitterRate=-0.0936519056558609}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:58,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6ee6b625580cb30148c9f8a3bdab9a5f: 2023-07-18 20:14:58,500 INFO [StoreOpener-849e823477126f0c7b1ab3d81aa7d5ed-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 849e823477126f0c7b1ab3d81aa7d5ed columnFamilyName f 2023-07-18 20:14:58,501 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f., pid=63, masterSystemTime=1689711298422 2023-07-18 20:14:58,501 INFO [StoreOpener-849e823477126f0c7b1ab3d81aa7d5ed-1] regionserver.HStore(310): Store=849e823477126f0c7b1ab3d81aa7d5ed/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:14:58,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:58,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:58,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. 2023-07-18 20:14:58,503 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. 
2023-07-18 20:14:58,504 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=6ee6b625580cb30148c9f8a3bdab9a5f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:58,504 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298504"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711298504"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711298504"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711298504"}]},"ts":"1689711298504"} 2023-07-18 20:14:58,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:58,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:14:58,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 849e823477126f0c7b1ab3d81aa7d5ed; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11767375840, jitterRate=0.09592227637767792}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:14:58,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 849e823477126f0c7b1ab3d81aa7d5ed: 2023-07-18 20:14:58,516 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=57 2023-07-18 20:14:58,516 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=57, state=SUCCESS; OpenRegionProcedure 6ee6b625580cb30148c9f8a3bdab9a5f, server=jenkins-hbase4.apache.org,41243,1689711288943 in 241 msec 2023-07-18 20:14:58,517 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed., pid=61, masterSystemTime=1689711298415 2023-07-18 20:14:58,518 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ee6b625580cb30148c9f8a3bdab9a5f, ASSIGN in 415 msec 2023-07-18 20:14:58,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 2023-07-18 20:14:58,520 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 
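
The RegionStateStore entries around here persist each region's assignment into the info family of hbase:meta (info:regioninfo, info:server, info:serverstartcode, info:seqnumDuringOpen, plus info:sn/info:state during transitions), which is exactly what the Put {...} lines spell out. A minimal read-only sketch of inspecting those columns from a client; the class and the printAssignments helper are hypothetical names, and a running cluster plus client configuration are assumed:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.filter.PrefixFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaAssignments {
        // Hypothetical helper: dump info:server / info:seqnumDuringOpen for one table's region rows in hbase:meta.
        static void printAssignments(Connection conn, String table) throws Exception {
            byte[] prefix = Bytes.toBytes(table + ",");   // meta row keys are <table>,<startkey>,<ts>.<encoded>.
            Scan scan = new Scan()
                .withStartRow(prefix)
                .setFilter(new PrefixFilter(prefix))
                .addFamily(Bytes.toBytes("info"));
            try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
                 ResultScanner rs = meta.getScanner(scan)) {
                for (Result r : rs) {
                    byte[] server = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
                    byte[] seq = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("seqnumDuringOpen"));
                    System.out.println(Bytes.toString(r.getRow())
                        + " -> server=" + (server == null ? "n/a" : Bytes.toString(server))
                        + ", seqnumDuringOpen=" + (seq == null ? "n/a" : Bytes.toLong(seq)));
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf)) {
                printAssignments(conn, "Group_testTableMoveTruncateAndDrop");
            }
        }
    }
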
2023-07-18 20:14:58,521 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=849e823477126f0c7b1ab3d81aa7d5ed, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:58,521 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711298520"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711298520"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711298520"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711298520"}]},"ts":"1689711298520"} 2023-07-18 20:14:58,525 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=60 2023-07-18 20:14:58,525 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; OpenRegionProcedure 849e823477126f0c7b1ab3d81aa7d5ed, server=jenkins-hbase4.apache.org,37953,1689711288586 in 261 msec 2023-07-18 20:14:58,531 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=55 2023-07-18 20:14:58,531 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=849e823477126f0c7b1ab3d81aa7d5ed, ASSIGN in 424 msec 2023-07-18 20:14:58,531 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711298531"}]},"ts":"1689711298531"} 2023-07-18 20:14:58,533 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 20:14:58,536 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-18 20:14:58,540 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 767 msec 2023-07-18 20:14:58,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-18 20:14:58,889 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 55 completed 2023-07-18 20:14:58,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:58,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:14:58,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:58,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:14:58,893 INFO [Listener at localhost/39395] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:58,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:58,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:58,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-18 20:14:58,901 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711298901"}]},"ts":"1689711298901"} 2023-07-18 20:14:58,903 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 20:14:58,904 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 20:14:58,905 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58898cfd2be014c46c3a391817cb5afc, UNASSIGN}, {pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ee6b625580cb30148c9f8a3bdab9a5f, UNASSIGN}, {pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a1bbe3ec05a1c5a539f16f0d42fd6e9c, UNASSIGN}, {pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53a5be3868ad51ba04e7ea6d85f4b100, UNASSIGN}, {pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=849e823477126f0c7b1ab3d81aa7d5ed, UNASSIGN}] 2023-07-18 20:14:58,908 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58898cfd2be014c46c3a391817cb5afc, UNASSIGN 2023-07-18 20:14:58,908 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ee6b625580cb30148c9f8a3bdab9a5f, UNASSIGN 2023-07-18 20:14:58,908 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a1bbe3ec05a1c5a539f16f0d42fd6e9c, UNASSIGN 2023-07-18 20:14:58,908 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53a5be3868ad51ba04e7ea6d85f4b100, UNASSIGN 2023-07-18 
20:14:58,909 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=849e823477126f0c7b1ab3d81aa7d5ed, UNASSIGN 2023-07-18 20:14:58,909 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=58898cfd2be014c46c3a391817cb5afc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:58,909 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=6ee6b625580cb30148c9f8a3bdab9a5f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:58,909 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=a1bbe3ec05a1c5a539f16f0d42fd6e9c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:58,909 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298909"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711298909"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711298909"}]},"ts":"1689711298909"} 2023-07-18 20:14:58,909 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298909"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711298909"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711298909"}]},"ts":"1689711298909"} 2023-07-18 20:14:58,909 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711298909"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711298909"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711298909"}]},"ts":"1689711298909"} 2023-07-18 20:14:58,910 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=53a5be3868ad51ba04e7ea6d85f4b100, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:14:58,910 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711298910"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711298910"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711298910"}]},"ts":"1689711298910"} 2023-07-18 20:14:58,911 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=849e823477126f0c7b1ab3d81aa7d5ed, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:14:58,911 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711298911"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711298911"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711298911"}]},"ts":"1689711298911"} 2023-07-18 20:14:58,912 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=69, state=RUNNABLE; CloseRegionProcedure a1bbe3ec05a1c5a539f16f0d42fd6e9c, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:14:58,913 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=68, state=RUNNABLE; CloseRegionProcedure 6ee6b625580cb30148c9f8a3bdab9a5f, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:58,914 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=67, state=RUNNABLE; CloseRegionProcedure 58898cfd2be014c46c3a391817cb5afc, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:14:58,915 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=70, state=RUNNABLE; CloseRegionProcedure 53a5be3868ad51ba04e7ea6d85f4b100, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:14:58,916 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=71, state=RUNNABLE; CloseRegionProcedure 849e823477126f0c7b1ab3d81aa7d5ed, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:14:58,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-18 20:14:59,065 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:59,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 58898cfd2be014c46c3a391817cb5afc, disabling compactions & flushes 2023-07-18 20:14:59,066 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. 2023-07-18 20:14:59,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. 2023-07-18 20:14:59,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. after waiting 0 ms 2023-07-18 20:14:59,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. 
2023-07-18 20:14:59,067 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:59,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 53a5be3868ad51ba04e7ea6d85f4b100, disabling compactions & flushes 2023-07-18 20:14:59,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. 2023-07-18 20:14:59,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. 2023-07-18 20:14:59,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. after waiting 0 ms 2023-07-18 20:14:59,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. 2023-07-18 20:14:59,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:14:59,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:14:59,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100. 2023-07-18 20:14:59,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 53a5be3868ad51ba04e7ea6d85f4b100: 2023-07-18 20:14:59,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc. 2023-07-18 20:14:59,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 58898cfd2be014c46c3a391817cb5afc: 2023-07-18 20:14:59,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:59,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:59,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6ee6b625580cb30148c9f8a3bdab9a5f, disabling compactions & flushes 2023-07-18 20:14:59,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. 
2023-07-18 20:14:59,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. 2023-07-18 20:14:59,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. after waiting 0 ms 2023-07-18 20:14:59,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. 2023-07-18 20:14:59,100 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=53a5be3868ad51ba04e7ea6d85f4b100, regionState=CLOSED 2023-07-18 20:14:59,101 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711299100"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711299100"}]},"ts":"1689711299100"} 2023-07-18 20:14:59,101 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:59,101 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:59,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 849e823477126f0c7b1ab3d81aa7d5ed, disabling compactions & flushes 2023-07-18 20:14:59,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 2023-07-18 20:14:59,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 2023-07-18 20:14:59,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. after waiting 0 ms 2023-07-18 20:14:59,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 
2023-07-18 20:14:59,103 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=58898cfd2be014c46c3a391817cb5afc, regionState=CLOSED 2023-07-18 20:14:59,104 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711299103"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711299103"}]},"ts":"1689711299103"} 2023-07-18 20:14:59,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:14:59,109 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=70 2023-07-18 20:14:59,109 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=70, state=SUCCESS; CloseRegionProcedure 53a5be3868ad51ba04e7ea6d85f4b100, server=jenkins-hbase4.apache.org,41243,1689711288943 in 190 msec 2023-07-18 20:14:59,109 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f. 2023-07-18 20:14:59,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6ee6b625580cb30148c9f8a3bdab9a5f: 2023-07-18 20:14:59,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:14:59,110 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=67 2023-07-18 20:14:59,110 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=67, state=SUCCESS; CloseRegionProcedure 58898cfd2be014c46c3a391817cb5afc, server=jenkins-hbase4.apache.org,37953,1689711288586 in 192 msec 2023-07-18 20:14:59,110 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed. 
2023-07-18 20:14:59,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 849e823477126f0c7b1ab3d81aa7d5ed: 2023-07-18 20:14:59,111 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53a5be3868ad51ba04e7ea6d85f4b100, UNASSIGN in 204 msec 2023-07-18 20:14:59,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:59,113 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58898cfd2be014c46c3a391817cb5afc, UNASSIGN in 205 msec 2023-07-18 20:14:59,113 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=6ee6b625580cb30148c9f8a3bdab9a5f, regionState=CLOSED 2023-07-18 20:14:59,113 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711299113"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711299113"}]},"ts":"1689711299113"} 2023-07-18 20:14:59,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:59,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:59,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a1bbe3ec05a1c5a539f16f0d42fd6e9c, disabling compactions & flushes 2023-07-18 20:14:59,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. 2023-07-18 20:14:59,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. 2023-07-18 20:14:59,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. after waiting 0 ms 2023-07-18 20:14:59,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. 
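
The recurring "Checking to see if procedure is done pid=66" entries are the client polling MasterRpcServices until the disable finishes; the blocking Admin.disableTable call is implemented on top of the same poll. A short sketch that makes the polling explicit by using the async variant (illustrative only, not the test's code):

    import java.util.concurrent.Future;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class AsyncDisable {
        static void disableAndWait(Admin admin) throws Exception {
            TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
            // Submits a DisableTableProcedure on the master and returns a future keyed on its procId.
            Future<Void> pending = admin.disableTableAsync(tn);
            // get() repeatedly asks the master whether the procedure is done, as in the entries above and below.
            pending.get();
        }
    }
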
2023-07-18 20:14:59,116 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=849e823477126f0c7b1ab3d81aa7d5ed, regionState=CLOSED 2023-07-18 20:14:59,116 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689711299115"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711299115"}]},"ts":"1689711299115"} 2023-07-18 20:14:59,124 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=68 2023-07-18 20:14:59,125 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=68, state=SUCCESS; CloseRegionProcedure 6ee6b625580cb30148c9f8a3bdab9a5f, server=jenkins-hbase4.apache.org,41243,1689711288943 in 210 msec 2023-07-18 20:14:59,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:14:59,126 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=71 2023-07-18 20:14:59,126 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=71, state=SUCCESS; CloseRegionProcedure 849e823477126f0c7b1ab3d81aa7d5ed, server=jenkins-hbase4.apache.org,37953,1689711288586 in 207 msec 2023-07-18 20:14:59,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c. 
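
Once the disable completes, the delete (pid=77) in the entries below triggers the RSGroupAdminEndpoint to drop the table from rsgroup Group_testTableMoveTruncateAndDrop_1360591523 and rewrite the /hbase/rsgroup znodes, while HFileArchiver moves each region directory out of the table's .tmp/data/... location into archive/data/... (here essentially just the recovered.edits seqid markers, since the truncated table holds no data files). A small sketch of that path translation; the archivePathFor helper is a hypothetical name, not an HBase API:

    import org.apache.hadoop.fs.Path;

    public class ArchiveLayout {
        // Hypothetical helper: where HFileArchiver relocates a file from a deleted region.
        static Path archivePathFor(Path rootDir, String ns, String table, String region, String file) {
            return new Path(rootDir, "archive/data/" + ns + "/" + table + "/" + region + "/" + file);
        }

        public static void main(String[] args) {
            Path root = new Path("hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67");
            System.out.println(archivePathFor(root, "default", "Group_testTableMoveTruncateAndDrop",
                "58898cfd2be014c46c3a391817cb5afc", "recovered.edits/4.seqid"));
            // Prints the archive/data/default/... destination seen in the "Archived from ... to ..." entries below.
        }
    }
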
2023-07-18 20:14:59,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a1bbe3ec05a1c5a539f16f0d42fd6e9c: 2023-07-18 20:14:59,129 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ee6b625580cb30148c9f8a3bdab9a5f, UNASSIGN in 220 msec 2023-07-18 20:14:59,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:59,130 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=849e823477126f0c7b1ab3d81aa7d5ed, UNASSIGN in 221 msec 2023-07-18 20:14:59,130 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=a1bbe3ec05a1c5a539f16f0d42fd6e9c, regionState=CLOSED 2023-07-18 20:14:59,130 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689711299130"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711299130"}]},"ts":"1689711299130"} 2023-07-18 20:14:59,135 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=69 2023-07-18 20:14:59,135 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; CloseRegionProcedure a1bbe3ec05a1c5a539f16f0d42fd6e9c, server=jenkins-hbase4.apache.org,37953,1689711288586 in 220 msec 2023-07-18 20:14:59,137 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=66 2023-07-18 20:14:59,138 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a1bbe3ec05a1c5a539f16f0d42fd6e9c, UNASSIGN in 230 msec 2023-07-18 20:14:59,139 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711299139"}]},"ts":"1689711299139"} 2023-07-18 20:14:59,141 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 20:14:59,143 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 20:14:59,146 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 250 msec 2023-07-18 20:14:59,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-18 20:14:59,201 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 66 completed 2023-07-18 20:14:59,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:59,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=77, 
state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:59,223 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:59,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1360591523' 2023-07-18 20:14:59,225 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:59,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:59,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:59,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:14:59,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-18 20:14:59,244 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:59,244 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:59,244 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:59,244 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:59,244 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:59,248 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc/recovered.edits] 2023-07-18 20:14:59,256 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c/recovered.edits] 2023-07-18 20:14:59,257 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed/recovered.edits] 2023-07-18 20:14:59,258 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f/recovered.edits] 2023-07-18 20:14:59,262 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100/recovered.edits] 2023-07-18 20:14:59,263 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc/recovered.edits/4.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc/recovered.edits/4.seqid 2023-07-18 20:14:59,264 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc 2023-07-18 20:14:59,272 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c/recovered.edits/4.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c/recovered.edits/4.seqid 2023-07-18 20:14:59,272 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f/recovered.edits/4.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f/recovered.edits/4.seqid 2023-07-18 20:14:59,273 DEBUG 
[HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed/recovered.edits/4.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed/recovered.edits/4.seqid 2023-07-18 20:14:59,273 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a1bbe3ec05a1c5a539f16f0d42fd6e9c 2023-07-18 20:14:59,274 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ee6b625580cb30148c9f8a3bdab9a5f 2023-07-18 20:14:59,274 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/849e823477126f0c7b1ab3d81aa7d5ed 2023-07-18 20:14:59,274 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100/recovered.edits/4.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100/recovered.edits/4.seqid 2023-07-18 20:14:59,275 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53a5be3868ad51ba04e7ea6d85f4b100 2023-07-18 20:14:59,275 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 20:14:59,278 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:59,284 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 20:14:59,286 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-18 20:14:59,289 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:59,289 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
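The HFileArchiver entries above show each region directory under .tmp/data/default/Group_testTableMoveTruncateAndDrop being copied into the cluster's archive/data tree (the 4.seqid files under recovered.edits) before the source directory is deleted. Purely as an illustration, not the test's own code, that outcome could be checked with the Hadoop FileSystem API; the paths below are copied from this particular run and the class name is hypothetical:

    // Hedged sketch: confirm that DeleteTableProcedure archived a region's
    // recovered.edits under archive/ and removed the .tmp region directory.
    // Paths mirror this run only and are illustrative.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CheckArchivedRegion {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path archived = new Path("hdfs://localhost:37087/user/jenkins/test-data/"
            + "295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/"
            + "Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc/recovered.edits/4.seqid");
        Path tmpRegionDir = new Path("hdfs://localhost:37087/user/jenkins/test-data/"
            + "295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/"
            + "Group_testTableMoveTruncateAndDrop/58898cfd2be014c46c3a391817cb5afc");
        FileSystem fs = archived.getFileSystem(conf);   // resolve the hdfs:// scheme from the path
        System.out.println("archived seqid present: " + fs.exists(archived));
        System.out.println(".tmp region dir removed: " + !fs.exists(tmpRegionDir));
      }
    }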
2023-07-18 20:14:59,289 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711299289"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:59,289 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711299289"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:59,289 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711299289"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:59,289 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711299289"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:59,289 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711299289"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:59,293 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 20:14:59,293 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 58898cfd2be014c46c3a391817cb5afc, NAME => 'Group_testTableMoveTruncateAndDrop,,1689711297837.58898cfd2be014c46c3a391817cb5afc.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 6ee6b625580cb30148c9f8a3bdab9a5f, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689711297837.6ee6b625580cb30148c9f8a3bdab9a5f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => a1bbe3ec05a1c5a539f16f0d42fd6e9c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689711297837.a1bbe3ec05a1c5a539f16f0d42fd6e9c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 53a5be3868ad51ba04e7ea6d85f4b100, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689711297837.53a5be3868ad51ba04e7ea6d85f4b100.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 849e823477126f0c7b1ab3d81aa7d5ed, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689711297837.849e823477126f0c7b1ab3d81aa7d5ed.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 20:14:59,293 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-18 20:14:59,293 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689711299293"}]},"ts":"9223372036854775807"} 2023-07-18 20:14:59,295 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 20:14:59,306 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 20:14:59,308 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 98 msec 2023-07-18 20:14:59,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-18 20:14:59,344 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 77 completed 2023-07-18 20:14:59,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:59,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:14:59,348 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37953] ipc.CallRunner(144): callId: 164 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:37064 deadline: 1689711359348, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46139 startCode=1689711292506. As of locationSeqNum=6. 2023-07-18 20:14:59,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:14:59,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
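The "Operation: DISABLE ... procId: 66 completed" and "Operation: DELETE ... procId: 77 completed" lines above are the client-side acknowledgements of the DisableTableProcedure and DeleteTableProcedure the master just finished. A minimal sketch of that client sequence through the standard Admin interface, assuming a reachable cluster; only the table name is taken from this log, and the class name is hypothetical:

    // Hedged sketch of the disable-then-delete calls acknowledged in the log above.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableExample {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);   // master runs a DisableTableProcedure (pid=66 in this run)
          }
          admin.deleteTable(table);      // master runs a DeleteTableProcedure (pid=77 in this run)
        }
      }
    }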
2023-07-18 20:14:59,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:14:59,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243] to rsgroup default 2023-07-18 20:14:59,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:59,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:59,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:14:59,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1360591523, current retry=0 2023-07-18 20:14:59,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586, jenkins-hbase4.apache.org,41243,1689711288943] are moved back to Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:59,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1360591523 => default 2023-07-18 20:14:59,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:14:59,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1360591523 2023-07-18 20:14:59,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:59,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 20:14:59,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:14:59,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:14:59,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
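The rsgroup teardown above (moving an empty table set and the two region servers back to the default group, then removing Group_testTableMoveTruncateAndDrop_1360591523) is the cleanup the test issues through the RSGroupAdmin endpoint after each method. A rough, non-authoritative sketch of those calls using the RSGroupAdminClient that appears in the stack traces further down; the server addresses come from this log, and the constructor and method shapes are assumed from the branch-2.4 hbase-rsgroup module:

    // Hedged sketch of the cleanup traffic logged above: return the moved servers
    // to the default group, then drop the now-empty test group.
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupCleanupExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37953));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41243));
          rsGroupAdmin.moveServers(servers, "default");   // "move servers [...] to rsgroup default"
          rsGroupAdmin.removeRSGroup("Group_testTableMoveTruncateAndDrop_1360591523");
        }
      }
    }

The ConstraintException a few entries later is the expected result of a similar moveServers call made with the master's own address (port 32929, the master RPC port seen throughout these handler logs) while restoring the "master" rsgroup; that address is not an online region server, so the endpoint reports it as offline or nonexistent and the test base class logs it as "Got this on setup, FYI" and continues.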
2023-07-18 20:14:59,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:14:59,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:14:59,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:14:59,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:14:59,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:14:59,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:14:59,512 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:14:59,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:14:59,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:59,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:14:59,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:14:59,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:14:59,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:14:59,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712499532, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:14:59,533 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:14:59,536 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:14:59,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,538 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:14:59,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:14:59,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:14:59,571 INFO [Listener at localhost/39395] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=496 (was 425) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1976443503-172.31.14.131-1689711282686:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1583482634_17 at /127.0.0.1:41212 [Receiving block BP-1976443503-172.31.14.131-1689711282686:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp611637181-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d5e85f7-shared-pool-8 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1976443503-172.31.14.131-1689711282686:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67-prefix:jenkins-hbase4.apache.org,46139,1689711292506 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46139 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:46139Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:46139 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46139 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d5e85f7-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46139 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46139 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-650985511_17 at /127.0.0.1:36470 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46139 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1583482634_17 at /127.0.0.1:51518 [Receiving block BP-1976443503-172.31.14.131-1689711282686:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp611637181-639-acceptor-0@208f8034-ServerConnector@266cf522{HTTP/1.1, (http/1.1)}{0.0.0.0:35449} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1886578613_17 at /127.0.0.1:51536 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d5e85f7-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp611637181-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: hconnection-0x3d5e85f7-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp611637181-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp611637181-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:46139-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46139 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46139 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp611637181-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp611637181-638 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/992306736.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1976443503-172.31.14.131-1689711282686:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:37087 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46139 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1583482634_17 at /127.0.0.1:41164 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-4 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52937@0x74ae934f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1925277295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp611637181-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46139 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:37087 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52937@0x74ae934f-SendThread(127.0.0.1:52937) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 
hconnection-0x3d5e85f7-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1583482634_17 at /127.0.0.1:36430 [Receiving block BP-1976443503-172.31.14.131-1689711282686:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3b6361e2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46139 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52937@0x74ae934f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
hconnection-0x3d5e85f7-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=792 (was 681) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=370 (was 401), ProcessCount=173 (was 173), AvailableMemoryMB=2726 (was 3389) 2023-07-18 20:14:59,590 INFO [Listener at localhost/39395] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=496, OpenFileDescriptor=792, MaxFileDescriptor=60000, SystemLoadAverage=370, ProcessCount=173, AvailableMemoryMB=2724 2023-07-18 20:14:59,591 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-18 20:14:59,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:14:59,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
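The timestamped records from 20:14:59,590 onward are the per-test reset that TestRSGroupsBase runs around each method (the tearDownAfterMethod/setUpBeforeMethod frames in the stack traces further down): list the groups, move their tables and servers back to default, then remove them. A minimal client-side sketch of that sequence, assuming the branch-2.4 RSGroupAdminClient whose moveServers call appears in those traces; the other method names, the connection setup, and the class name below are assumptions of this sketch, not taken from the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // "list rsgroup": enumerate every group and fold everything back into default.
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
              continue;
            }
            // "move tables [...] to rsgroup default" / "move servers [...] to rsgroup default";
            // empty sets are logged and ignored server-side, as the surrounding records show.
            rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
            rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
            // "remove rsgroup <name>", which triggers the RSGroupInfoManagerImpl
            // "Updating znode: /hbase/rsgroup/..." / "Writing ZK GroupInfo count" records.
            rsGroupAdmin.removeRSGroup(group.getName());
          }
        }
      }
    }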
2023-07-18 20:14:59,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:14:59,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:14:59,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:14:59,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:14:59,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:14:59,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:14:59,619 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:14:59,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:14:59,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:59,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:14:59,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:14:59,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:14:59,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:14:59,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712499637, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:14:59,638 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:14:59,640 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:14:59,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,647 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:14:59,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:14:59,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:14:59,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-18 20:14:59,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:14:59,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:57512 deadline: 1689712499649, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 20:14:59,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-18 20:14:59,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:14:59,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:57512 deadline: 1689712499651, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 20:14:59,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-18 20:14:59,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:14:59,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:57512 deadline: 1689712499653, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 20:14:59,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-18 20:14:59,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-18 20:14:59,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:59,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:14:59,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:14:59,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:14:59,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
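The four add-rsgroup attempts above (foo*, foo@, -, foo_123) pin down the check that RSGroupInfoManagerImpl.checkGroupName enforces: despite the wording of the message, underscores pass along with letters and digits, while *, @ and a bare hyphen are rejected with a ConstraintException. A minimal client-side sketch of that probe, assuming the same RSGroupAdminClient API as in the sketch above; ConstraintException extends DoNotRetryIOException, so after the RPC layer unwraps it (see the earlier WARN trace) it reaches the caller as an IOException:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class GroupNameProbe {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          for (String name : new String[] { "foo*", "foo@", "-", "foo_123" }) {
            try {
              rsGroupAdmin.addRSGroup(name);
              // foo_123 reaches this branch: the znode /hbase/rsgroup/foo_123 is written.
              System.out.println("accepted: " + name);
              rsGroupAdmin.removeRSGroup(name); // clean up the accepted group again
            } catch (IOException e) {
              // foo*, foo@ and - land here with the ConstraintException message
              // "RSGroup name should only contain alphanumeric characters".
              System.out.println("rejected: " + name + " -> " + e.getMessage());
            }
          }
        }
      }
    }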
2023-07-18 20:14:59,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:14:59,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:14:59,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:14:59,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-18 20:14:59,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:59,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 20:14:59,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:14:59,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:14:59,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
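The recurring "Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist" ConstraintException (seen above and again below) comes from trying to move the active master's RPC address into the rsgroup named master; as the default group's server list shows, 32929 is not one of the registered region servers, so RSGroupAdminServer.moveServers rejects it and the test only logs "Got this on setup, FYI". A hedged sketch of the offending call, with the host and port taken from this log and everything else illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("master");
          // The master's RPC endpoint (port 32929 in this run) is not a region server,
          // so the group manager rejects it with the ConstraintException logged here.
          Address master = Address.fromParts("jenkins-hbase4.apache.org", 32929);
          rsGroupAdmin.moveServers(Collections.singleton(master), "master");
        }
      }
    }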
2023-07-18 20:14:59,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:14:59,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:14:59,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:14:59,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:14:59,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:14:59,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:14:59,705 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:14:59,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:14:59,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:59,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:14:59,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:14:59,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:14:59,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:14:59,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712499721, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:14:59,721 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:14:59,723 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:14:59,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,724 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:14:59,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:14:59,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:14:59,742 INFO [Listener at localhost/39395] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=499 (was 496) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=792 (was 792), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=370 (was 370), ProcessCount=173 (was 173), AvailableMemoryMB=2720 (was 2724) 2023-07-18 20:14:59,758 INFO [Listener at localhost/39395] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=499, OpenFileDescriptor=792, MaxFileDescriptor=60000, SystemLoadAverage=370, ProcessCount=173, AvailableMemoryMB=2719 2023-07-18 20:14:59,759 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-18 20:14:59,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:14:59,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
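The ConstraintException traces above are expected noise: TestRSGroupsBase tries to park the active master's address (port 32929 here) in a dedicated "master" group, and RSGroupAdminServer.moveServers refuses it because the master is not a live region server; the test only logs the failure ("Got this on setup, FYI"). A hedged sketch of that tolerate-the-failure pattern, with the address taken from this log and the wrapper class and connection setup purely illustrative:

    import java.io.IOException;
    import java.util.Collections;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterAddressSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("master");
          try {
            // The master's RPC address is not a region server, so the move is refused
            // with "Server ... is either offline or it does not exist."
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 32929)),
                "master");
          } catch (IOException expected) {
            // Surfaces client-side as a ConstraintException in this run; ignored, as in the test.
          }
        }
      }
    }
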
2023-07-18 20:14:59,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:14:59,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:14:59,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:14:59,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:14:59,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:14:59,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:14:59,779 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:14:59,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:14:59,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:59,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:14:59,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:14:59,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:14:59,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:14:59,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712499794, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:14:59,795 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:14:59,797 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:14:59,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,799 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:14:59,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:14:59,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:14:59,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:14:59,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:14:59,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
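From this point the log records testFailRemoveGroup's setup: group "bar" is created, three of the four region servers are moved into it (which first forces the hbase:meta region off server 43019 via the REOPEN/MOVE procedure that follows), and the table Group_testFailRemoveGroup is created with a single column family 'f'. A sketch of the equivalent client calls, with hostnames and ports copied from this run purely for illustration and the wrapper class assumed:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class FailRemoveGroupSetupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("bar");

          // The three region servers moved in this run; regions they host (including
          // hbase:meta) are reassigned to the remaining default-group server first.
          Set<Address> servers = new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 37953),
              Address.fromParts("jenkins-hbase4.apache.org", 41243),
              Address.fromParts("jenkins-hbase4.apache.org", 43019)));
          rsGroupAdmin.moveServers(servers, "bar");

          // Table created later in the log, with the single column family 'f'.
          admin.createTable(
              TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                  .build());
        }
      }
    }
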
2023-07-18 20:14:59,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 20:14:59,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:59,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:14:59,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:14:59,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:14:59,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:14:59,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019] to rsgroup bar 2023-07-18 20:14:59,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:14:59,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 20:14:59,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:14:59,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:14:59,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-18 20:14:59,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 20:14:59,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 20:14:59,828 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 20:14:59,829 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43019,1689711288774, state=CLOSING 2023-07-18 20:14:59,831 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
path=/hbase/meta-region-server 2023-07-18 20:14:59,831 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:14:59,831 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 20:14:59,986 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-18 20:14:59,987 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 20:14:59,987 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 20:14:59,987 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 20:14:59,987 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 20:14:59,987 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 20:14:59,987 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=43.12 KB heapSize=66.86 KB 2023-07-18 20:15:00,020 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=40.06 KB at sequenceid=98 (bloomFilter=false), to=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/info/a849948728d84c2db68849a0368d9da3 2023-07-18 20:15:00,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a849948728d84c2db68849a0368d9da3 2023-07-18 20:15:00,063 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=98 (bloomFilter=false), to=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/rep_barrier/dc1ea7c5296c4bce85c397380cb40123 2023-07-18 20:15:00,071 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dc1ea7c5296c4bce85c397380cb40123 2023-07-18 20:15:00,089 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=98 (bloomFilter=false), to=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/table/3a7717b00d3f4fa09bbda87dc308e730 2023-07-18 20:15:00,096 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3a7717b00d3f4fa09bbda87dc308e730 2023-07-18 20:15:00,097 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/info/a849948728d84c2db68849a0368d9da3 as 
hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/info/a849948728d84c2db68849a0368d9da3 2023-07-18 20:15:00,107 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a849948728d84c2db68849a0368d9da3 2023-07-18 20:15:00,107 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/info/a849948728d84c2db68849a0368d9da3, entries=50, sequenceid=98, filesize=10.6 K 2023-07-18 20:15:00,109 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/rep_barrier/dc1ea7c5296c4bce85c397380cb40123 as hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/rep_barrier/dc1ea7c5296c4bce85c397380cb40123 2023-07-18 20:15:00,125 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dc1ea7c5296c4bce85c397380cb40123 2023-07-18 20:15:00,125 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/rep_barrier/dc1ea7c5296c4bce85c397380cb40123, entries=10, sequenceid=98, filesize=6.1 K 2023-07-18 20:15:00,127 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/table/3a7717b00d3f4fa09bbda87dc308e730 as hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/table/3a7717b00d3f4fa09bbda87dc308e730 2023-07-18 20:15:00,135 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3a7717b00d3f4fa09bbda87dc308e730 2023-07-18 20:15:00,136 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/table/3a7717b00d3f4fa09bbda87dc308e730, entries=15, sequenceid=98, filesize=6.2 K 2023-07-18 20:15:00,137 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~43.12 KB/44157, heapSize ~66.81 KB/68416, currentSize=0 B/0 for 1588230740 in 150ms, sequenceid=98, compaction requested=false 2023-07-18 20:15:00,152 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/recovered.edits/101.seqid, newMaxSeqId=101, maxSeqId=1 2023-07-18 20:15:00,153 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 20:15:00,154 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 20:15:00,154 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 
1588230740: 2023-07-18 20:15:00,154 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,46139,1689711292506 record at close sequenceid=98 2023-07-18 20:15:00,156 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-18 20:15:00,156 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-18 20:15:00,158 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-18 20:15:00,158 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43019,1689711288774 in 325 msec 2023-07-18 20:15:00,159 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:15:00,309 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46139,1689711292506, state=OPENING 2023-07-18 20:15:00,317 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 20:15:00,317 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:00,317 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 20:15:00,474 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 20:15:00,474 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:00,476 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46139%2C1689711292506.meta, suffix=.meta, logDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,46139,1689711292506, archiveDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs, maxLogs=32 2023-07-18 20:15:00,500 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK] 2023-07-18 20:15:00,500 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK] 2023-07-18 20:15:00,500 DEBUG [RS-EventLoopGroup-7-2] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK] 2023-07-18 20:15:00,506 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/WALs/jenkins-hbase4.apache.org,46139,1689711292506/jenkins-hbase4.apache.org%2C46139%2C1689711292506.meta.1689711300477.meta 2023-07-18 20:15:00,506 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46743,DS-31032bf6-54fd-47cd-a202-13b53ad166ad,DISK], DatanodeInfoWithStorage[127.0.0.1:40715,DS-904a127d-cae0-4246-b3ff-e88ccf67c32e,DISK], DatanodeInfoWithStorage[127.0.0.1:34903,DS-ba8906de-792b-42d7-9fac-76f4e7644349,DISK]] 2023-07-18 20:15:00,506 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:00,507 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 20:15:00,507 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 20:15:00,507 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-18 20:15:00,507 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 20:15:00,507 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:00,507 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 20:15:00,507 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 20:15:00,509 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 20:15:00,510 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/info 2023-07-18 20:15:00,510 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/info 2023-07-18 20:15:00,511 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 20:15:00,521 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a849948728d84c2db68849a0368d9da3 2023-07-18 20:15:00,530 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/info/a849948728d84c2db68849a0368d9da3 2023-07-18 20:15:00,530 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:00,531 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 20:15:00,532 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/rep_barrier 2023-07-18 20:15:00,532 DEBUG [StoreOpener-1588230740-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/rep_barrier 2023-07-18 20:15:00,533 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 20:15:00,548 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dc1ea7c5296c4bce85c397380cb40123 2023-07-18 20:15:00,548 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/rep_barrier/dc1ea7c5296c4bce85c397380cb40123 2023-07-18 20:15:00,548 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:00,548 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 20:15:00,549 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/table 2023-07-18 20:15:00,549 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/table 2023-07-18 20:15:00,550 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 20:15:00,561 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3a7717b00d3f4fa09bbda87dc308e730 2023-07-18 20:15:00,561 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/table/3a7717b00d3f4fa09bbda87dc308e730 2023-07-18 20:15:00,561 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:00,562 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740 2023-07-18 20:15:00,564 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740 2023-07-18 20:15:00,567 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 20:15:00,568 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 20:15:00,569 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=102; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11021988800, jitterRate=0.02650269865989685}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 20:15:00,569 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 20:15:00,570 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=80, masterSystemTime=1689711300469 2023-07-18 20:15:00,572 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 20:15:00,572 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 20:15:00,573 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46139,1689711292506, state=OPEN 2023-07-18 20:15:00,580 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 20:15:00,580 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 20:15:00,582 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-18 20:15:00,582 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46139,1689711292506 in 263 msec 2023-07-18 20:15:00,584 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 756 msec 2023-07-18 20:15:00,828 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-18 20:15:00,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586, jenkins-hbase4.apache.org,41243,1689711288943, jenkins-hbase4.apache.org,43019,1689711288774] are moved back to default 2023-07-18 20:15:00,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-18 20:15:00,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:00,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:00,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:00,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 20:15:00,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:00,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:00,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-18 20:15:00,846 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:00,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-18 20:15:00,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 20:15:00,849 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:00,850 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 20:15:00,850 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:00,851 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo 
count: 6 2023-07-18 20:15:00,853 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:00,854 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43019] ipc.CallRunner(144): callId: 193 service: ClientService methodName: Get size: 142 connection: 172.31.14.131:47694 deadline: 1689711360854, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46139 startCode=1689711292506. As of locationSeqNum=98. 2023-07-18 20:15:00,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 20:15:00,956 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:00,958 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72 empty. 2023-07-18 20:15:00,958 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:00,958 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 20:15:01,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 20:15:01,392 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:01,393 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 61680decefee71b2dc10459e334cfd72, NAME => 'Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:15:01,406 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:01,406 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 61680decefee71b2dc10459e334cfd72, disabling compactions & flushes 2023-07-18 20:15:01,406 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:01,406 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:01,406 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. after waiting 0 ms 2023-07-18 20:15:01,406 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:01,406 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:01,406 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 61680decefee71b2dc10459e334cfd72: 2023-07-18 20:15:01,409 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:01,410 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711301410"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711301410"}]},"ts":"1689711301410"} 2023-07-18 20:15:01,412 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
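The descriptor logged above ('Group_testFailRemoveGroup' with REGION_REPLICATION => '1' and a single family 'f') maps onto the standard HBase 2.x Admin API. The following is a minimal illustrative sketch of the client-side call that drives a CreateTableProcedure like pid=81; it is not the test's actual code, and the class name is invented for the example.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateGroupTableExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // One column family 'f', region replication 1, defaults otherwise,
          // matching the descriptor printed in the HMaster create request above.
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build());
        }
      }
    }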
2023-07-18 20:15:01,412 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:01,412 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711301412"}]},"ts":"1689711301412"} 2023-07-18 20:15:01,414 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-18 20:15:01,422 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, ASSIGN}] 2023-07-18 20:15:01,424 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, ASSIGN 2023-07-18 20:15:01,425 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:15:01,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 20:15:01,577 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:01,577 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711301576"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711301576"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711301576"}]},"ts":"1689711301576"} 2023-07-18 20:15:01,579 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:01,735 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 
2023-07-18 20:15:01,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61680decefee71b2dc10459e334cfd72, NAME => 'Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:01,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:01,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:01,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:01,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:01,737 INFO [StoreOpener-61680decefee71b2dc10459e334cfd72-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:01,738 DEBUG [StoreOpener-61680decefee71b2dc10459e334cfd72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/f 2023-07-18 20:15:01,739 DEBUG [StoreOpener-61680decefee71b2dc10459e334cfd72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/f 2023-07-18 20:15:01,739 INFO [StoreOpener-61680decefee71b2dc10459e334cfd72-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61680decefee71b2dc10459e334cfd72 columnFamilyName f 2023-07-18 20:15:01,740 INFO [StoreOpener-61680decefee71b2dc10459e334cfd72-1] regionserver.HStore(310): Store=61680decefee71b2dc10459e334cfd72/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:01,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:01,741 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:01,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:01,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:01,746 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 61680decefee71b2dc10459e334cfd72; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11494310880, jitterRate=0.07049112021923065}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:01,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 61680decefee71b2dc10459e334cfd72: 2023-07-18 20:15:01,747 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72., pid=83, masterSystemTime=1689711301730 2023-07-18 20:15:01,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:01,748 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 
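The "Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms" lines a few records below come from the test harness rather than the region server. A minimal sketch of that wait, assuming access to the suite's shared HBaseTestingUtility instance; awaitAssignment is an illustrative helper name.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    // Block until every region of the newly created table is assigned, or fail
    // after 60 seconds; this is the pattern that produces the
    // HBaseTestingUtility/Waiter log lines that follow.
    static void awaitAssignment(HBaseTestingUtility util) throws Exception {
      util.waitUntilAllRegionsAssigned(
          TableName.valueOf("Group_testFailRemoveGroup"), 60000);
    }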
2023-07-18 20:15:01,749 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:01,749 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711301749"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711301749"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711301749"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711301749"}]},"ts":"1689711301749"} 2023-07-18 20:15:01,752 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-18 20:15:01,752 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,46139,1689711292506 in 171 msec 2023-07-18 20:15:01,754 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-18 20:15:01,754 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, ASSIGN in 330 msec 2023-07-18 20:15:01,754 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:01,754 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711301754"}]},"ts":"1689711301754"} 2023-07-18 20:15:01,756 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-18 20:15:01,759 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:01,761 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 920 msec 2023-07-18 20:15:01,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 20:15:01,956 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-18 20:15:01,956 DEBUG [Listener at localhost/39395] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-18 20:15:01,956 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:01,959 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43019] ipc.CallRunner(144): callId: 277 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:47710 deadline: 1689711361959, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46139 startCode=1689711292506. As of locationSeqNum=98. 2023-07-18 20:15:02,071 DEBUG [hconnection-0x422d8bf2-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:15:02,073 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45324, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:15:02,082 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-18 20:15:02,082 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:02,082 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-18 20:15:02,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-18 20:15:02,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:02,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 20:15:02,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:02,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:02,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-18 20:15:02,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region 61680decefee71b2dc10459e334cfd72 to RSGroup bar 2023-07-18 20:15:02,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:02,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:02,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:02,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:02,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 20:15:02,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:02,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, REOPEN/MOVE 2023-07-18 20:15:02,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-18 20:15:02,095 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, REOPEN/MOVE 2023-07-18 20:15:02,096 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:02,096 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711302096"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711302096"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711302096"}]},"ts":"1689711302096"} 2023-07-18 20:15:02,099 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:02,123 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 20:15:02,252 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:02,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 61680decefee71b2dc10459e334cfd72, disabling compactions & flushes 2023-07-18 20:15:02,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:02,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:02,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. after waiting 0 ms 2023-07-18 20:15:02,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 
2023-07-18 20:15:02,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:02,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:02,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 61680decefee71b2dc10459e334cfd72: 2023-07-18 20:15:02,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 61680decefee71b2dc10459e334cfd72 move to jenkins-hbase4.apache.org,37953,1689711288586 record at close sequenceid=2 2023-07-18 20:15:02,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:02,261 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=CLOSED 2023-07-18 20:15:02,261 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711302261"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711302261"}]},"ts":"1689711302261"} 2023-07-18 20:15:02,264 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-18 20:15:02,265 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,46139,1689711292506 in 164 msec 2023-07-18 20:15:02,265 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37953,1689711288586; forceNewPlan=false, retain=false 2023-07-18 20:15:02,416 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
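After the balancer picks a new target ("Reassigned 1 regions. 1 retained the pre-restart assignment."), the records below show the region opening on jenkins-hbase4.apache.org,37953,1689711288586, one of the servers moved into group bar earlier. A way to confirm from the client side where a region currently lives, assuming an open Connection named conn; printLocations is an illustrative helper.

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    // Print each region of the table together with the RegionServer currently hosting it.
    static void printLocations(Connection conn) throws Exception {
      try (RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("Group_testFailRemoveGroup"))) {
        for (HRegionLocation loc : locator.getAllRegionLocations()) {
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }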
2023-07-18 20:15:02,416 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:02,416 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711302416"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711302416"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711302416"}]},"ts":"1689711302416"} 2023-07-18 20:15:02,419 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:15:02,579 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:02,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61680decefee71b2dc10459e334cfd72, NAME => 'Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:02,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:02,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:02,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:02,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:02,581 INFO [StoreOpener-61680decefee71b2dc10459e334cfd72-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:02,582 DEBUG [StoreOpener-61680decefee71b2dc10459e334cfd72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/f 2023-07-18 20:15:02,582 DEBUG [StoreOpener-61680decefee71b2dc10459e334cfd72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/f 2023-07-18 20:15:02,583 INFO [StoreOpener-61680decefee71b2dc10459e334cfd72-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, 
major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61680decefee71b2dc10459e334cfd72 columnFamilyName f 2023-07-18 20:15:02,583 INFO [StoreOpener-61680decefee71b2dc10459e334cfd72-1] regionserver.HStore(310): Store=61680decefee71b2dc10459e334cfd72/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:02,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:02,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:02,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:02,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 61680decefee71b2dc10459e334cfd72; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10328680480, jitterRate=-0.038066670298576355}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:02,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 61680decefee71b2dc10459e334cfd72: 2023-07-18 20:15:02,593 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72., pid=86, masterSystemTime=1689711302575 2023-07-18 20:15:02,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:02,595 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 
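This REOPEN/MOVE (pid=84) was driven by the RSGroupAdminService.MoveTables request logged earlier, and the next records acknowledge it with "All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar." A hedged client-side sketch using the hbase-rsgroup RSGroupAdminClient, again assuming an open Connection named conn; moveTableToBar is an invented helper name.

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Move the test table into rsgroup "bar"; as the log above shows, the master
    // reopens the table's regions on servers of that group before acknowledging.
    static void moveTableToBar(Connection conn) throws Exception {
      RSGroupAdmin groups = new RSGroupAdminClient(conn);
      groups.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
    }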
2023-07-18 20:15:02,596 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:02,596 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711302595"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711302595"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711302595"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711302595"}]},"ts":"1689711302595"} 2023-07-18 20:15:02,600 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-18 20:15:02,600 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,37953,1689711288586 in 179 msec 2023-07-18 20:15:02,602 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, REOPEN/MOVE in 507 msec 2023-07-18 20:15:02,960 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-18 20:15:03,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-18 20:15:03,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 2023-07-18 20:15:03,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:03,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:03,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:03,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 20:15:03,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:03,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 20:15:03,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:03,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:57512 deadline: 1689712503104, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-18 20:15:03,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019] to rsgroup default 2023-07-18 20:15:03,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:03,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:57512 deadline: 1689712503105, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
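The two ConstraintExceptions above are the failure modes this test exercises: a group cannot be removed while it still owns tables, and its servers cannot all leave while those tables have nowhere else to run. A sketch of how a caller might see the first case; removeRSGroup is declared to throw IOException, and in this run the failure is the ConstraintException shown above. tryRemoveBar is an invented name, and groups is an RSGroupAdmin as in the previous sketch.

    import java.io.IOException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

    // Attempt to delete a group that still contains a table; expect the call to fail.
    static void tryRemoveBar(RSGroupAdmin groups) {
      try {
        groups.removeRSGroup("bar");
        System.out.println("unexpected: removal succeeded");
      } catch (IOException e) {
        // e.g. "RSGroup bar has 1 tables; you must remove these tables from the
        // rsgroup before the rsgroup can be removed."
        System.out.println("cannot remove group yet: " + e.getMessage());
      }
    }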
2023-07-18 20:15:03,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-18 20:15:03,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:03,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 20:15:03,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:03,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:03,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-18 20:15:03,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region 61680decefee71b2dc10459e334cfd72 to RSGroup default 2023-07-18 20:15:03,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, REOPEN/MOVE 2023-07-18 20:15:03,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 20:15:03,116 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, REOPEN/MOVE 2023-07-18 20:15:03,117 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:03,117 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711303117"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711303117"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711303117"}]},"ts":"1689711303117"} 2023-07-18 20:15:03,121 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:15:03,274 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:03,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 61680decefee71b2dc10459e334cfd72, disabling compactions & flushes 2023-07-18 20:15:03,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 
2023-07-18 20:15:03,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:03,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. after waiting 0 ms 2023-07-18 20:15:03,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:03,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 20:15:03,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:03,283 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 61680decefee71b2dc10459e334cfd72: 2023-07-18 20:15:03,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 61680decefee71b2dc10459e334cfd72 move to jenkins-hbase4.apache.org,46139,1689711292506 record at close sequenceid=5 2023-07-18 20:15:03,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:03,285 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=CLOSED 2023-07-18 20:15:03,286 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711303285"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711303285"}]},"ts":"1689711303285"} 2023-07-18 20:15:03,289 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-18 20:15:03,289 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,37953,1689711288586 in 169 msec 2023-07-18 20:15:03,290 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:15:03,441 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:03,441 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711303440"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711303440"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711303440"}]},"ts":"1689711303440"} 2023-07-18 20:15:03,443 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:03,599 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:03,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61680decefee71b2dc10459e334cfd72, NAME => 'Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:03,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:03,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:03,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:03,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:03,601 INFO [StoreOpener-61680decefee71b2dc10459e334cfd72-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:03,603 DEBUG [StoreOpener-61680decefee71b2dc10459e334cfd72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/f 2023-07-18 20:15:03,603 DEBUG [StoreOpener-61680decefee71b2dc10459e334cfd72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/f 2023-07-18 20:15:03,603 INFO [StoreOpener-61680decefee71b2dc10459e334cfd72-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61680decefee71b2dc10459e334cfd72 columnFamilyName f 2023-07-18 20:15:03,604 INFO [StoreOpener-61680decefee71b2dc10459e334cfd72-1] regionserver.HStore(310): Store=61680decefee71b2dc10459e334cfd72/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:03,605 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:03,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:03,609 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:03,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 61680decefee71b2dc10459e334cfd72; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11854394400, jitterRate=0.10402651131153107}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:03,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 61680decefee71b2dc10459e334cfd72: 2023-07-18 20:15:03,615 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72., pid=89, masterSystemTime=1689711303595 2023-07-18 20:15:03,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:03,616 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 
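The rest of the sequence spells out the order a client has to follow before a non-empty group can be deleted: move its tables out, then move its servers out, and only then remove it (the RemoveRSGroup request below finally succeeds, dropping the ZK GroupInfo count from 6 to 5). A condensed sketch under the same assumptions as the earlier snippets, with the three RegionServer addresses taken from this run; drainAndRemoveBar is an invented helper name.

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashSet;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

    // Drain the group in the order the master enforces, then remove it.
    static void drainAndRemoveBar(RSGroupAdmin groups) throws Exception {
      // 1. Tables back to the default group (the REOPEN/MOVE just completed above).
      groups.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
      // 2. Servers back to the default group.
      groups.moveServers(new HashSet<>(Arrays.asList(
          Address.fromString("jenkins-hbase4.apache.org:37953"),
          Address.fromString("jenkins-hbase4.apache.org:41243"),
          Address.fromString("jenkins-hbase4.apache.org:43019"))), "default");
      // 3. Removal is now allowed.
      groups.removeRSGroup("bar");
    }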
2023-07-18 20:15:03,617 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:03,617 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711303617"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711303617"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711303617"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711303617"}]},"ts":"1689711303617"} 2023-07-18 20:15:03,624 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-18 20:15:03,624 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,46139,1689711292506 in 175 msec 2023-07-18 20:15:03,626 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, REOPEN/MOVE in 510 msec 2023-07-18 20:15:04,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-18 20:15:04,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 2023-07-18 20:15:04,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:04,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 20:15:04,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:04,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 296 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:57512 deadline: 1689712504123, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 2023-07-18 20:15:04,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019] to rsgroup default 2023-07-18 20:15:04,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:04,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 20:15:04,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:04,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:04,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-18 20:15:04,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586, jenkins-hbase4.apache.org,41243,1689711288943, jenkins-hbase4.apache.org,43019,1689711288774] are moved back to bar 2023-07-18 20:15:04,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-18 20:15:04,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:04,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,136 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 20:15:04,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:04,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:04,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 20:15:04,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:04,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,148 INFO [Listener at localhost/39395] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-18 20:15:04,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-18 20:15:04,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-18 20:15:04,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-18 20:15:04,152 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711304152"}]},"ts":"1689711304152"} 2023-07-18 20:15:04,153 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-18 20:15:04,155 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-18 20:15:04,155 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, UNASSIGN}] 2023-07-18 20:15:04,157 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, UNASSIGN 2023-07-18 
20:15:04,157 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:04,158 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711304157"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711304157"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711304157"}]},"ts":"1689711304157"} 2023-07-18 20:15:04,159 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:04,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-18 20:15:04,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:04,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 61680decefee71b2dc10459e334cfd72, disabling compactions & flushes 2023-07-18 20:15:04,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:04,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:04,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. after waiting 0 ms 2023-07-18 20:15:04,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 2023-07-18 20:15:04,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 20:15:04,318 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72. 
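The RemoveRSGroup/MoveServers exchange above (callId 296 onward) exercises the constraint this test is named for: removing rsgroup "bar" fails with a ConstraintException while the group still owns servers, the servers are moved back to the default group, and the retry then succeeds. A minimal client-side sketch of that drain-then-remove sequence, using the RSGroupAdminClient API that appears in the stack trace, might look like the following; the connection setup is illustrative only, and the group name "bar" simply mirrors the log.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class DrainAndRemoveRSGroup {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();        // assumes hbase-site.xml on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");   // group name mirrors the log above
      if (bar != null) {
        // Move the group's tables and servers back to the default group first;
        // removing a group that still owns servers fails with the
        // ConstraintException seen at callId 296 above.
        if (!bar.getTables().isEmpty()) {
          rsGroupAdmin.moveTables(bar.getTables(), RSGroupInfo.DEFAULT_GROUP);
        }
        if (!bar.getServers().isEmpty()) {
          rsGroupAdmin.moveServers(bar.getServers(), RSGroupInfo.DEFAULT_GROUP);
        }
        rsGroupAdmin.removeRSGroup("bar");
      }
    }
  }
}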
2023-07-18 20:15:04,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 61680decefee71b2dc10459e334cfd72: 2023-07-18 20:15:04,320 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:04,320 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=61680decefee71b2dc10459e334cfd72, regionState=CLOSED 2023-07-18 20:15:04,321 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689711304320"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711304320"}]},"ts":"1689711304320"} 2023-07-18 20:15:04,327 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-18 20:15:04,327 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure 61680decefee71b2dc10459e334cfd72, server=jenkins-hbase4.apache.org,46139,1689711292506 in 166 msec 2023-07-18 20:15:04,329 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-18 20:15:04,329 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=61680decefee71b2dc10459e334cfd72, UNASSIGN in 172 msec 2023-07-18 20:15:04,330 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711304330"}]},"ts":"1689711304330"} 2023-07-18 20:15:04,331 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-18 20:15:04,333 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-18 20:15:04,336 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 185 msec 2023-07-18 20:15:04,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-18 20:15:04,455 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-18 20:15:04,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-18 20:15:04,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 20:15:04,459 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 20:15:04,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-18 20:15:04,459 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 20:15:04,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:04,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:04,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:04,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 20:15:04,466 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:04,468 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/recovered.edits] 2023-07-18 20:15:04,474 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/recovered.edits/10.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72/recovered.edits/10.seqid 2023-07-18 20:15:04,475 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testFailRemoveGroup/61680decefee71b2dc10459e334cfd72 2023-07-18 20:15:04,475 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 20:15:04,478 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 20:15:04,480 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-18 20:15:04,483 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-18 20:15:04,484 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 20:15:04,484 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-18 20:15:04,484 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711304484"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:04,486 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 20:15:04,486 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 61680decefee71b2dc10459e334cfd72, NAME => 'Group_testFailRemoveGroup,,1689711300839.61680decefee71b2dc10459e334cfd72.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 20:15:04,486 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-18 20:15:04,486 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689711304486"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:04,493 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-18 20:15:04,495 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 20:15:04,496 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 39 msec 2023-07-18 20:15:04,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 20:15:04,567 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-18 20:15:04,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:04,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
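The DisableTableProcedure (pid=90) and DeleteTableProcedure (pid=93) above are driven by ordinary Admin calls from the test client (the HBaseAdmin$TableFuture lines mark their completion). A minimal sketch of that disable-then-delete step, assuming a standard client Configuration, might be:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropGroupTestTable {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();               // assumes hbase-site.xml on the classpath
    TableName tn = TableName.valueOf("Group_testFailRemoveGroup");  // table name taken from the log above
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      if (admin.tableExists(tn)) {
        if (admin.isTableEnabled(tn)) {
          admin.disableTable(tn);  // corresponds to the DisableTableProcedure (pid=90) above
        }
        admin.deleteTable(tn);     // corresponds to the DeleteTableProcedure (pid=93) above
      }
    }
  }
}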
2023-07-18 20:15:04,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:04,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:04,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:04,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:04,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:04,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:04,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:04,589 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:04,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:04,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:04,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:04,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:04,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:04,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:04,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:04,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 344 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712504603, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:04,604 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:04,606 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:04,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,608 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:04,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:04,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:04,632 INFO [Listener at localhost/39395] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=510 (was 499) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d5e85f7-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67-prefix:jenkins-hbase4.apache.org,46139,1689711292506.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1583482634_17 at /127.0.0.1:49610 [Receiving block BP-1976443503-172.31.14.131-1689711282686:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1976443503-172.31.14.131-1689711282686:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-650985511_17 at /127.0.0.1:50508 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d5e85f7-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1583482634_17 at /127.0.0.1:51536 [Waiting for operation #15] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d5e85f7-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d5e85f7-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d5e85f7-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x422d8bf2-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1583482634_17 at /127.0.0.1:34654 [Receiving block BP-1976443503-172.31.14.131-1689711282686:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1583482634_17 at /127.0.0.1:50458 [Receiving block BP-1976443503-172.31.14.131-1689711282686:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1976443503-172.31.14.131-1689711282686:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d5e85f7-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1976443503-172.31.14.131-1689711282686:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=807 (was 792) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=348 (was 370), ProcessCount=173 (was 173), AvailableMemoryMB=2412 (was 2719) 2023-07-18 20:15:04,632 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-18 20:15:04,655 INFO [Listener at localhost/39395] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=510, OpenFileDescriptor=807, MaxFileDescriptor=60000, SystemLoadAverage=348, ProcessCount=173, AvailableMemoryMB=2411 2023-07-18 20:15:04,655 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-18 20:15:04,655 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-18 20:15:04,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:04,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 20:15:04,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:04,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:04,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:04,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:04,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:04,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:04,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:04,673 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:04,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:04,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:04,676 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:04,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:04,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:04,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:04,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:04,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 372 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712504685, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:04,685 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 20:15:04,689 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:04,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,691 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:04,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:04,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:04,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:04,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:04,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1945548567 2023-07-18 20:15:04,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:04,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:04,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1945548567 2023-07-18 20:15:04,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:04,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:04,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,704 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953] to rsgroup Group_testMultiTableMove_1945548567 2023-07-18 20:15:04,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:04,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:04,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1945548567 2023-07-18 20:15:04,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:04,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 20:15:04,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586] are moved back to default 2023-07-18 20:15:04,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1945548567 2023-07-18 20:15:04,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:04,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:04,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:04,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1945548567 2023-07-18 20:15:04,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:04,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:04,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 20:15:04,719 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:04,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-18 20:15:04,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 20:15:04,721 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:04,721 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:04,722 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1945548567 2023-07-18 20:15:04,722 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:04,728 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:04,730 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:04,731 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a empty. 2023-07-18 20:15:04,731 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:04,732 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 20:15:04,753 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:04,754 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => aa78d25851dbb471e113517b0121996a, NAME => 'GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:15:04,767 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:04,768 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
aa78d25851dbb471e113517b0121996a, disabling compactions & flushes 2023-07-18 20:15:04,768 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:04,768 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:04,768 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. after waiting 0 ms 2023-07-18 20:15:04,768 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:04,768 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:04,768 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for aa78d25851dbb471e113517b0121996a: 2023-07-18 20:15:04,771 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:04,772 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711304771"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711304771"}]},"ts":"1689711304771"} 2023-07-18 20:15:04,773 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
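The ListRSGroupInfos, MoveTables, MoveServers, RemoveRSGroup and AddRSGroup requests logged above are issued by the test harness through org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient (the class visible in the stack trace). A minimal sketch of that setup/teardown sequence, assuming an open Connection to the mini cluster and the branch-2.x client signatures; the group and server names are copied from the log, and the class RsGroupSetupSketch is purely illustrative, not the test's actual code:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupSetupSketch {
      static void resetAndPrepareGroups(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

        // "move tables [] to rsgroup default" / "move servers [] to rsgroup default":
        // empty sets are ignored server-side ("moveTables() passed an empty set. Ignoring.").
        rsGroupAdmin.moveTables(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);
        rsGroupAdmin.moveServers(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);

        // "remove rsgroup master" followed by "add rsgroup master".
        rsGroupAdmin.removeRSGroup("master");
        rsGroupAdmin.addRSGroup("master");

        // "move servers [jenkins-hbase4.apache.org:32929] to rsgroup master" fails with
        // a ConstraintException, presumably because 32929 is the master's address and
        // not one of the region servers tracked by the group manager.
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 32929)),
              "master");
        } catch (Exception expectedOnSetup) {
          // the test merely logs this ("Got this on setup, FYI")
        }

        // "add rsgroup Group_testMultiTableMove_1945548567" and
        // "move servers [jenkins-hbase4.apache.org:37953]" into it.
        rsGroupAdmin.addRSGroup("Group_testMultiTableMove_1945548567");
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 37953)),
            "Group_testMultiTableMove_1945548567");
      }
    }
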
2023-07-18 20:15:04,774 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:04,774 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711304774"}]},"ts":"1689711304774"} 2023-07-18 20:15:04,776 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-18 20:15:04,779 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:04,779 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:04,779 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:04,779 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:04,779 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:04,779 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=aa78d25851dbb471e113517b0121996a, ASSIGN}] 2023-07-18 20:15:04,784 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=aa78d25851dbb471e113517b0121996a, ASSIGN 2023-07-18 20:15:04,785 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=aa78d25851dbb471e113517b0121996a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43019,1689711288774; forceNewPlan=false, retain=false 2023-07-18 20:15:04,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 20:15:04,851 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 20:15:04,936 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 20:15:04,937 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=aa78d25851dbb471e113517b0121996a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:04,937 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711304937"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711304937"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711304937"}]},"ts":"1689711304937"} 2023-07-18 20:15:04,939 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure aa78d25851dbb471e113517b0121996a, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:15:05,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 20:15:05,097 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:05,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa78d25851dbb471e113517b0121996a, NAME => 'GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:05,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:05,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:05,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:05,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:05,100 INFO [StoreOpener-aa78d25851dbb471e113517b0121996a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:05,102 DEBUG [StoreOpener-aa78d25851dbb471e113517b0121996a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a/f 2023-07-18 20:15:05,102 DEBUG [StoreOpener-aa78d25851dbb471e113517b0121996a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a/f 2023-07-18 20:15:05,103 INFO [StoreOpener-aa78d25851dbb471e113517b0121996a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa78d25851dbb471e113517b0121996a columnFamilyName f 2023-07-18 20:15:05,103 INFO [StoreOpener-aa78d25851dbb471e113517b0121996a-1] regionserver.HStore(310): Store=aa78d25851dbb471e113517b0121996a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:05,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:05,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:05,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:05,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:05,304 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aa78d25851dbb471e113517b0121996a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9734465120, jitterRate=-0.0934072881937027}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:05,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aa78d25851dbb471e113517b0121996a: 2023-07-18 20:15:05,305 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a., pid=96, masterSystemTime=1689711305092 2023-07-18 20:15:05,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:05,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 
2023-07-18 20:15:05,308 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=aa78d25851dbb471e113517b0121996a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:05,308 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711305308"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711305308"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711305308"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711305308"}]},"ts":"1689711305308"} 2023-07-18 20:15:05,313 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-18 20:15:05,313 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure aa78d25851dbb471e113517b0121996a, server=jenkins-hbase4.apache.org,43019,1689711288774 in 372 msec 2023-07-18 20:15:05,318 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-18 20:15:05,319 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=aa78d25851dbb471e113517b0121996a, ASSIGN in 534 msec 2023-07-18 20:15:05,320 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:05,320 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711305320"}]},"ts":"1689711305320"} 2023-07-18 20:15:05,322 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-18 20:15:05,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 20:15:05,327 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:05,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 611 msec 2023-07-18 20:15:05,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 20:15:05,826 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-18 20:15:05,826 DEBUG [Listener at localhost/39395] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-18 20:15:05,826 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:05,833 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
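The CreateTableProcedure completed above (pid=94) was triggered by a client-side create of 'GrouptestMultiTableMoveA' with a single column family 'f' and default attributes. A hedged sketch of the equivalent Admin call using the 2.x descriptor builders; CreateTableSketch is illustrative, and the attribute values mirror what the log prints (VERSIONS => '1', REGION_REPLICATION => '1'):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      static void createTableA(Admin admin) throws Exception {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("GrouptestMultiTableMoveA"))
            .setRegionReplication(1)                    // REGION_REPLICATION => '1'
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)                      // VERSIONS => '1'
                .build())
            .build();

        // Blocks until the CreateTableProcedure finishes, i.e. the single region
        // has been added to hbase:meta and assigned (pid=94 -> 95 -> 96 above).
        admin.createTable(desc);
      }
    }

The "Waiting until all regions of table GrouptestMultiTableMoveA get assigned" line that follows the create is the testing utility's waitUntilAllRegionsAssigned check before the table is used.
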
2023-07-18 20:15:05,833 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:05,833 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-18 20:15:05,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:05,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 20:15:05,862 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:05,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-18 20:15:05,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 20:15:05,872 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:05,872 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:05,873 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1945548567 2023-07-18 20:15:05,873 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:05,876 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:05,879 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:05,880 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99 empty. 
2023-07-18 20:15:05,881 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:05,881 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 20:15:05,940 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:05,943 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 756b97317453d57c9c47e14b0a6c9e99, NAME => 'GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:15:05,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 20:15:05,986 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:05,986 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 756b97317453d57c9c47e14b0a6c9e99, disabling compactions & flushes 2023-07-18 20:15:05,986 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:05,986 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:05,986 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. after waiting 0 ms 2023-07-18 20:15:05,986 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:05,986 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 
2023-07-18 20:15:05,986 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 756b97317453d57c9c47e14b0a6c9e99: 2023-07-18 20:15:05,990 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:05,991 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711305991"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711305991"}]},"ts":"1689711305991"} 2023-07-18 20:15:05,993 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 20:15:05,994 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:05,994 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711305994"}]},"ts":"1689711305994"} 2023-07-18 20:15:05,999 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-18 20:15:06,010 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:06,010 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:06,010 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:06,010 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:06,010 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:06,010 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=756b97317453d57c9c47e14b0a6c9e99, ASSIGN}] 2023-07-18 20:15:06,013 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=756b97317453d57c9c47e14b0a6c9e99, ASSIGN 2023-07-18 20:15:06,014 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=756b97317453d57c9c47e14b0a6c9e99, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:15:06,164 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
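The repeated "Checking to see if procedure is done pid=97" lines are the client polling the master while HBaseAdmin's TableFuture waits for the create to complete. A sketch of driving that explicitly with the asynchronous variant; this assumes the two-argument createTableAsync on the 2.x Admin interface, with null split keys producing a single region as in this test:

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    public class AsyncCreateSketch {
      static void createAndWait(Admin admin, TableDescriptor desc) throws Exception {
        // Submits the CreateTableProcedure and returns immediately; waiting on the
        // future is what produces the periodic "Checking to see if procedure is
        // done pid=..." polling until the procedure reaches SUCCESS.
        Future<Void> pending = admin.createTableAsync(desc, null);
        pending.get(60, TimeUnit.SECONDS);
      }
    }
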
2023-07-18 20:15:06,166 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=756b97317453d57c9c47e14b0a6c9e99, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:06,166 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711306166"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711306166"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711306166"}]},"ts":"1689711306166"} 2023-07-18 20:15:06,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 20:15:06,168 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 756b97317453d57c9c47e14b0a6c9e99, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:06,330 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:06,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 756b97317453d57c9c47e14b0a6c9e99, NAME => 'GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:06,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:06,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:06,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:06,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:06,341 INFO [StoreOpener-756b97317453d57c9c47e14b0a6c9e99-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:06,345 DEBUG [StoreOpener-756b97317453d57c9c47e14b0a6c9e99-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99/f 2023-07-18 20:15:06,346 DEBUG [StoreOpener-756b97317453d57c9c47e14b0a6c9e99-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99/f 2023-07-18 20:15:06,348 INFO [StoreOpener-756b97317453d57c9c47e14b0a6c9e99-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 756b97317453d57c9c47e14b0a6c9e99 columnFamilyName f 2023-07-18 20:15:06,349 INFO [StoreOpener-756b97317453d57c9c47e14b0a6c9e99-1] regionserver.HStore(310): Store=756b97317453d57c9c47e14b0a6c9e99/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:06,350 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:06,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:06,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:06,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:06,364 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 756b97317453d57c9c47e14b0a6c9e99; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10640901280, jitterRate=-0.008988842368125916}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:06,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 756b97317453d57c9c47e14b0a6c9e99: 2023-07-18 20:15:06,366 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99., pid=99, masterSystemTime=1689711306324 2023-07-18 20:15:06,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:06,370 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 
2023-07-18 20:15:06,370 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=756b97317453d57c9c47e14b0a6c9e99, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:06,371 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711306370"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711306370"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711306370"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711306370"}]},"ts":"1689711306370"} 2023-07-18 20:15:06,376 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-18 20:15:06,376 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 756b97317453d57c9c47e14b0a6c9e99, server=jenkins-hbase4.apache.org,46139,1689711292506 in 205 msec 2023-07-18 20:15:06,380 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-18 20:15:06,380 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=756b97317453d57c9c47e14b0a6c9e99, ASSIGN in 366 msec 2023-07-18 20:15:06,381 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:06,381 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711306381"}]},"ts":"1689711306381"} 2023-07-18 20:15:06,383 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-18 20:15:06,386 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:06,390 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 550 msec 2023-07-18 20:15:06,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 20:15:06,470 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-18 20:15:06,470 DEBUG [Listener at localhost/39395] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-18 20:15:06,470 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:06,485 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
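Once both creates complete, the hosting server of each region can be read back through the client, which is how a test can confirm where a region sits before and after a group move (the log shows 756b97317453d57c9c47e14b0a6c9e99 opening on jenkins-hbase4.apache.org,46139). A minimal sketch using RegionLocator, with the table name taken from the log:

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationSketch {
      static void printLocations(Connection conn) throws Exception {
        try (RegionLocator locator =
            conn.getRegionLocator(TableName.valueOf("GrouptestMultiTableMoveB"))) {
          for (HRegionLocation location : locator.getAllRegionLocations()) {
            // Before the move: 756b97317453d57c9c47e14b0a6c9e99 on
            // jenkins-hbase4.apache.org,46139,1689711292506.
            System.out.println(location.getRegion().getEncodedName()
                + " -> " + location.getServerName());
          }
        }
      }
    }
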
2023-07-18 20:15:06,485 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:06,486 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-18 20:15:06,486 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:06,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 20:15:06,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:06,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 20:15:06,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:06,504 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1945548567 2023-07-18 20:15:06,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1945548567 2023-07-18 20:15:06,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:06,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:06,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1945548567 2023-07-18 20:15:06,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:06,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1945548567 2023-07-18 20:15:06,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region 756b97317453d57c9c47e14b0a6c9e99 to RSGroup Group_testMultiTableMove_1945548567 2023-07-18 20:15:06,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=756b97317453d57c9c47e14b0a6c9e99, REOPEN/MOVE 2023-07-18 20:15:06,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1945548567 2023-07-18 20:15:06,523 INFO [PEWorker-2] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=756b97317453d57c9c47e14b0a6c9e99, REOPEN/MOVE 2023-07-18 20:15:06,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region aa78d25851dbb471e113517b0121996a to RSGroup Group_testMultiTableMove_1945548567 2023-07-18 20:15:06,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=aa78d25851dbb471e113517b0121996a, REOPEN/MOVE 2023-07-18 20:15:06,524 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=756b97317453d57c9c47e14b0a6c9e99, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:06,525 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=aa78d25851dbb471e113517b0121996a, REOPEN/MOVE 2023-07-18 20:15:06,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1945548567, current retry=0 2023-07-18 20:15:06,525 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711306524"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711306524"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711306524"}]},"ts":"1689711306524"} 2023-07-18 20:15:06,525 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=aa78d25851dbb471e113517b0121996a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:06,526 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711306525"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711306525"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711306525"}]},"ts":"1689711306525"} 2023-07-18 20:15:06,527 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 756b97317453d57c9c47e14b0a6c9e99, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:06,527 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure aa78d25851dbb471e113517b0121996a, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:15:06,679 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:06,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 756b97317453d57c9c47e14b0a6c9e99, disabling compactions & flushes 2023-07-18 20:15:06,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 
2023-07-18 20:15:06,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:06,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. after waiting 0 ms 2023-07-18 20:15:06,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:06,681 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:06,682 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aa78d25851dbb471e113517b0121996a, disabling compactions & flushes 2023-07-18 20:15:06,682 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:06,682 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:06,682 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. after waiting 0 ms 2023-07-18 20:15:06,682 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:06,685 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:06,685 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:06,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:06,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 756b97317453d57c9c47e14b0a6c9e99: 2023-07-18 20:15:06,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 756b97317453d57c9c47e14b0a6c9e99 move to jenkins-hbase4.apache.org,37953,1689711288586 record at close sequenceid=2 2023-07-18 20:15:06,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 
2023-07-18 20:15:06,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aa78d25851dbb471e113517b0121996a: 2023-07-18 20:15:06,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding aa78d25851dbb471e113517b0121996a move to jenkins-hbase4.apache.org,37953,1689711288586 record at close sequenceid=2 2023-07-18 20:15:06,688 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:06,689 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=756b97317453d57c9c47e14b0a6c9e99, regionState=CLOSED 2023-07-18 20:15:06,689 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711306689"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711306689"}]},"ts":"1689711306689"} 2023-07-18 20:15:06,689 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:06,689 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=aa78d25851dbb471e113517b0121996a, regionState=CLOSED 2023-07-18 20:15:06,690 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711306689"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711306689"}]},"ts":"1689711306689"} 2023-07-18 20:15:06,693 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-18 20:15:06,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-18 20:15:06,693 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 756b97317453d57c9c47e14b0a6c9e99, server=jenkins-hbase4.apache.org,46139,1689711292506 in 163 msec 2023-07-18 20:15:06,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure aa78d25851dbb471e113517b0121996a, server=jenkins-hbase4.apache.org,43019,1689711288774 in 164 msec 2023-07-18 20:15:06,694 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=756b97317453d57c9c47e14b0a6c9e99, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37953,1689711288586; forceNewPlan=false, retain=false 2023-07-18 20:15:06,694 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=aa78d25851dbb471e113517b0121996a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37953,1689711288586; forceNewPlan=false, retain=false 2023-07-18 20:15:06,844 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=aa78d25851dbb471e113517b0121996a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 
20:15:06,844 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=756b97317453d57c9c47e14b0a6c9e99, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:06,845 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711306844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711306844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711306844"}]},"ts":"1689711306844"} 2023-07-18 20:15:06,845 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711306844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711306844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711306844"}]},"ts":"1689711306844"} 2023-07-18 20:15:06,847 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=101, state=RUNNABLE; OpenRegionProcedure aa78d25851dbb471e113517b0121996a, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:15:06,848 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=100, state=RUNNABLE; OpenRegionProcedure 756b97317453d57c9c47e14b0a6c9e99, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:15:07,024 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:07,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa78d25851dbb471e113517b0121996a, NAME => 'GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:07,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:07,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:07,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:07,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:07,031 INFO [StoreOpener-aa78d25851dbb471e113517b0121996a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:07,035 DEBUG [StoreOpener-aa78d25851dbb471e113517b0121996a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a/f 2023-07-18 20:15:07,035 DEBUG [StoreOpener-aa78d25851dbb471e113517b0121996a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a/f 2023-07-18 20:15:07,036 INFO [StoreOpener-aa78d25851dbb471e113517b0121996a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa78d25851dbb471e113517b0121996a columnFamilyName f 2023-07-18 20:15:07,037 INFO [StoreOpener-aa78d25851dbb471e113517b0121996a-1] regionserver.HStore(310): Store=aa78d25851dbb471e113517b0121996a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:07,038 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:07,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:07,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:07,047 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aa78d25851dbb471e113517b0121996a; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11940791840, jitterRate=0.1120728999376297}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:07,047 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aa78d25851dbb471e113517b0121996a: 2023-07-18 20:15:07,048 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a., pid=104, masterSystemTime=1689711307010 2023-07-18 20:15:07,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:07,050 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 
2023-07-18 20:15:07,051 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:07,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 756b97317453d57c9c47e14b0a6c9e99, NAME => 'GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:07,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:07,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:07,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:07,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:07,052 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=aa78d25851dbb471e113517b0121996a, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:07,052 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711307052"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711307052"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711307052"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711307052"}]},"ts":"1689711307052"} 2023-07-18 20:15:07,057 INFO [StoreOpener-756b97317453d57c9c47e14b0a6c9e99-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:07,058 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=101 2023-07-18 20:15:07,059 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=101, state=SUCCESS; OpenRegionProcedure aa78d25851dbb471e113517b0121996a, server=jenkins-hbase4.apache.org,37953,1689711288586 in 207 msec 2023-07-18 20:15:07,060 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=aa78d25851dbb471e113517b0121996a, REOPEN/MOVE in 536 msec 2023-07-18 20:15:07,063 DEBUG [StoreOpener-756b97317453d57c9c47e14b0a6c9e99-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99/f 2023-07-18 20:15:07,063 DEBUG [StoreOpener-756b97317453d57c9c47e14b0a6c9e99-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99/f 2023-07-18 20:15:07,064 INFO [StoreOpener-756b97317453d57c9c47e14b0a6c9e99-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 756b97317453d57c9c47e14b0a6c9e99 columnFamilyName f 2023-07-18 20:15:07,065 INFO [StoreOpener-756b97317453d57c9c47e14b0a6c9e99-1] regionserver.HStore(310): Store=756b97317453d57c9c47e14b0a6c9e99/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:07,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:07,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:07,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:07,073 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 756b97317453d57c9c47e14b0a6c9e99; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9481344000, jitterRate=-0.11698102951049805}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:07,073 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 756b97317453d57c9c47e14b0a6c9e99: 2023-07-18 20:15:07,074 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99., pid=105, masterSystemTime=1689711307010 2023-07-18 20:15:07,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:07,077 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 
2023-07-18 20:15:07,077 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=756b97317453d57c9c47e14b0a6c9e99, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:07,077 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711307077"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711307077"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711307077"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711307077"}]},"ts":"1689711307077"} 2023-07-18 20:15:07,082 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=100 2023-07-18 20:15:07,082 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=100, state=SUCCESS; OpenRegionProcedure 756b97317453d57c9c47e14b0a6c9e99, server=jenkins-hbase4.apache.org,37953,1689711288586 in 232 msec 2023-07-18 20:15:07,084 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=756b97317453d57c9c47e14b0a6c9e99, REOPEN/MOVE in 562 msec 2023-07-18 20:15:07,237 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 20:15:07,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-18 20:15:07,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1945548567. 
2023-07-18 20:15:07,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:07,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:07,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:07,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 20:15:07,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:07,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 20:15:07,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:07,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:07,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:07,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1945548567 2023-07-18 20:15:07,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:07,539 INFO [Listener at localhost/39395] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-18 20:15:07,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-18 20:15:07,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 20:15:07,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-18 20:15:07,544 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711307544"}]},"ts":"1689711307544"} 2023-07-18 20:15:07,545 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-18 20:15:07,547 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-18 20:15:07,551 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=aa78d25851dbb471e113517b0121996a, UNASSIGN}] 2023-07-18 20:15:07,553 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=aa78d25851dbb471e113517b0121996a, UNASSIGN 2023-07-18 20:15:07,553 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=aa78d25851dbb471e113517b0121996a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:07,553 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711307553"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711307553"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711307553"}]},"ts":"1689711307553"} 2023-07-18 20:15:07,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure aa78d25851dbb471e113517b0121996a, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:15:07,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-18 20:15:07,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:07,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aa78d25851dbb471e113517b0121996a, disabling compactions & flushes 2023-07-18 20:15:07,708 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:07,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:07,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. after waiting 0 ms 2023-07-18 20:15:07,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 
2023-07-18 20:15:07,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 20:15:07,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a. 2023-07-18 20:15:07,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aa78d25851dbb471e113517b0121996a: 2023-07-18 20:15:07,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:07,716 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=aa78d25851dbb471e113517b0121996a, regionState=CLOSED 2023-07-18 20:15:07,716 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711307716"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711307716"}]},"ts":"1689711307716"} 2023-07-18 20:15:07,720 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-18 20:15:07,720 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure aa78d25851dbb471e113517b0121996a, server=jenkins-hbase4.apache.org,37953,1689711288586 in 162 msec 2023-07-18 20:15:07,724 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-18 20:15:07,724 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=aa78d25851dbb471e113517b0121996a, UNASSIGN in 172 msec 2023-07-18 20:15:07,725 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711307725"}]},"ts":"1689711307725"} 2023-07-18 20:15:07,726 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-18 20:15:07,728 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-18 20:15:07,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 189 msec 2023-07-18 20:15:07,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-18 20:15:07,845 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-18 20:15:07,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-18 20:15:07,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure 
table=GrouptestMultiTableMoveA 2023-07-18 20:15:07,848 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 20:15:07,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1945548567' 2023-07-18 20:15:07,849 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 20:15:07,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:07,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:07,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1945548567 2023-07-18 20:15:07,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:07,854 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:07,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 20:15:07,856 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a/recovered.edits] 2023-07-18 20:15:07,863 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a/recovered.edits/7.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a/recovered.edits/7.seqid 2023-07-18 20:15:07,864 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveA/aa78d25851dbb471e113517b0121996a 2023-07-18 20:15:07,864 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 20:15:07,867 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 20:15:07,874 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-18 20:15:07,876 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): 
Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-18 20:15:07,877 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 20:15:07,877 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-18 20:15:07,878 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711307877"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:07,879 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 20:15:07,879 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => aa78d25851dbb471e113517b0121996a, NAME => 'GrouptestMultiTableMoveA,,1689711304716.aa78d25851dbb471e113517b0121996a.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 20:15:07,879 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-18 20:15:07,880 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689711307880"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:07,882 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-18 20:15:07,884 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 20:15:07,885 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 38 msec 2023-07-18 20:15:07,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 20:15:07,956 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-18 20:15:07,957 INFO [Listener at localhost/39395] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-18 20:15:07,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-18 20:15:07,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 20:15:07,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-18 20:15:07,962 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711307962"}]},"ts":"1689711307962"} 2023-07-18 20:15:07,963 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-18 20:15:07,965 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-18 20:15:07,965 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=756b97317453d57c9c47e14b0a6c9e99, UNASSIGN}] 2023-07-18 20:15:07,967 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=756b97317453d57c9c47e14b0a6c9e99, UNASSIGN 2023-07-18 20:15:07,968 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=756b97317453d57c9c47e14b0a6c9e99, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:07,968 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711307968"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711307968"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711307968"}]},"ts":"1689711307968"} 2023-07-18 20:15:07,970 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 756b97317453d57c9c47e14b0a6c9e99, server=jenkins-hbase4.apache.org,37953,1689711288586}] 2023-07-18 20:15:08,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-18 20:15:08,122 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:08,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 756b97317453d57c9c47e14b0a6c9e99, disabling compactions & flushes 2023-07-18 20:15:08,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:08,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:08,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. after waiting 0 ms 2023-07-18 20:15:08,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 2023-07-18 20:15:08,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 20:15:08,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99. 
2023-07-18 20:15:08,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 756b97317453d57c9c47e14b0a6c9e99: 2023-07-18 20:15:08,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:08,131 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=756b97317453d57c9c47e14b0a6c9e99, regionState=CLOSED 2023-07-18 20:15:08,132 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689711308131"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711308131"}]},"ts":"1689711308131"} 2023-07-18 20:15:08,135 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-18 20:15:08,135 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 756b97317453d57c9c47e14b0a6c9e99, server=jenkins-hbase4.apache.org,37953,1689711288586 in 164 msec 2023-07-18 20:15:08,137 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-18 20:15:08,137 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=756b97317453d57c9c47e14b0a6c9e99, UNASSIGN in 170 msec 2023-07-18 20:15:08,137 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711308137"}]},"ts":"1689711308137"} 2023-07-18 20:15:08,139 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-18 20:15:08,140 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-18 20:15:08,142 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 184 msec 2023-07-18 20:15:08,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-18 20:15:08,264 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-18 20:15:08,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-18 20:15:08,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 20:15:08,268 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 20:15:08,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1945548567' 2023-07-18 20:15:08,269 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 20:15:08,274 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:08,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1945548567 2023-07-18 20:15:08,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:08,277 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99/recovered.edits] 2023-07-18 20:15:08,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 20:15:08,287 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99/recovered.edits/7.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99/recovered.edits/7.seqid 2023-07-18 20:15:08,288 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/GrouptestMultiTableMoveB/756b97317453d57c9c47e14b0a6c9e99 2023-07-18 20:15:08,288 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 20:15:08,291 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 20:15:08,301 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-18 20:15:08,306 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-18 20:15:08,309 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 20:15:08,309 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-18 20:15:08,309 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711308309"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:08,312 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 20:15:08,312 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 756b97317453d57c9c47e14b0a6c9e99, NAME => 'GrouptestMultiTableMoveB,,1689711305836.756b97317453d57c9c47e14b0a6c9e99.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 20:15:08,312 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-18 20:15:08,312 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689711308312"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:08,313 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-18 20:15:08,316 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 20:15:08,318 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 51 msec 2023-07-18 20:15:08,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 20:15:08,384 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-18 20:15:08,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:08,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 20:15:08,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:08,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:08,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:08,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:08,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1945548567 2023-07-18 20:15:08,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 20:15:08,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:08,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:08,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 20:15:08,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:08,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953] to rsgroup default 2023-07-18 20:15:08,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1945548567 2023-07-18 20:15:08,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:08,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1945548567, current retry=0 2023-07-18 20:15:08,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586] are moved back to Group_testMultiTableMove_1945548567 2023-07-18 20:15:08,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1945548567 => default 2023-07-18 20:15:08,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:08,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1945548567 2023-07-18 20:15:08,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:08,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:08,414 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:08,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:08,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:08,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:08,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:08,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:08,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 512 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712508430, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:08,430 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 20:15:08,432 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:08,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,433 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:08,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:08,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,456 INFO [Listener at localhost/39395] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=508 (was 510), OpenFileDescriptor=783 (was 807), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=352 (was 348) - SystemLoadAverage LEAK? -, ProcessCount=173 (was 173), AvailableMemoryMB=2325 (was 2411) 2023-07-18 20:15:08,457 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-18 20:15:08,477 INFO [Listener at localhost/39395] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=508, OpenFileDescriptor=783, MaxFileDescriptor=60000, SystemLoadAverage=352, ProcessCount=173, AvailableMemoryMB=2324 2023-07-18 20:15:08,477 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-18 20:15:08,477 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-18 20:15:08,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:08,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 20:15:08,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:08,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:08,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:08,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:08,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:08,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:08,496 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:08,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:08,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:08,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:08,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:08,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:08,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 540 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712508509, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:08,510 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:08,511 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:08,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,512 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:08,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:08,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:08,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-18 20:15:08,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 20:15:08,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:08,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:08,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243] to rsgroup oldGroup 2023-07-18 20:15:08,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 20:15:08,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:08,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 20:15:08,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586, jenkins-hbase4.apache.org,41243,1689711288943] are moved back to default 2023-07-18 20:15:08,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-18 20:15:08,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:08,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 20:15:08,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 20:15:08,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:08,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-18 20:15:08,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 20:15:08,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 20:15:08,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 20:15:08,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:08,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43019] to rsgroup anotherRSGroup 2023-07-18 20:15:08,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 20:15:08,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 20:15:08,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-18 20:15:08,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 20:15:08,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 20:15:08,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43019,1689711288774] are moved back to default 2023-07-18 20:15:08,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-18 20:15:08,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:08,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 20:15:08,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 20:15:08,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-18 20:15:08,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:08,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:57512 deadline: 1689712508579, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-18 20:15:08,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-18 20:15:08,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:08,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:57512 deadline: 1689712508581, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-18 20:15:08,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-18 20:15:08,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:08,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:57512 deadline: 1689712508582, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-18 20:15:08,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-18 20:15:08,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:08,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 580 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:57512 deadline: 1689712508583, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-18 20:15:08,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:08,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 20:15:08,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:08,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43019] to rsgroup default 2023-07-18 20:15:08,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 20:15:08,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 20:15:08,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 20:15:08,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-18 20:15:08,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43019,1689711288774] are moved back to anotherRSGroup 2023-07-18 20:15:08,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-18 20:15:08,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:08,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-18 20:15:08,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 20:15:08,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 20:15:08,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:08,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:08,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-18 20:15:08,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:08,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243] to rsgroup default 2023-07-18 20:15:08,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 20:15:08,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:08,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-18 20:15:08,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586, jenkins-hbase4.apache.org,41243,1689711288943] are moved back to oldGroup 2023-07-18 20:15:08,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-18 20:15:08,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:08,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-18 20:15:08,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 20:15:08,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:08,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:08,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
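Hedged sketch, not the test's actual tearDownAfterMethod: the cleanup pattern the handler logs here (move each group's servers back to default, then remove the group, shrinking the ZK GroupInfo count) roughly corresponds to the following client-side loop. The group and server names come from the log; the helper itself is illustrative.

import java.io.IOException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class RSGroupCleanupSketch {
  // Moves the servers of every non-default group (oldGroup, anotherRSGroup, ...)
  // back to "default" and then removes the group, as in the teardown above.
  static void restoreDefaults(RSGroupAdminClient rsAdmin) throws IOException {
    for (RSGroupInfo group : rsAdmin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue; // "default" always stays
      }
      if (!group.getServers().isEmpty()) {
        rsAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
      }
      // Any tables assigned to the group would also need moving back first;
      // the log shows moveTables([]) calls because these groups hold none.
      rsAdmin.removeRSGroup(group.getName()); // drops /hbase/rsgroup/<name> in ZK
    }
  }
}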
2023-07-18 20:15:08,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:08,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:08,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:08,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:08,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:08,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:08,630 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:08,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:08,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:08,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:08,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:08,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:08,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 616 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712508640, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:08,641 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:08,643 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:08,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,644 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:08,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:08,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,666 INFO [Listener at localhost/39395] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=512 (was 508) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=783 (was 783), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=352 (was 352), ProcessCount=173 (was 173), AvailableMemoryMB=2321 (was 2324) 2023-07-18 20:15:08,666 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-18 20:15:08,686 INFO [Listener at localhost/39395] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=512, OpenFileDescriptor=783, MaxFileDescriptor=60000, SystemLoadAverage=352, ProcessCount=173, AvailableMemoryMB=2320 2023-07-18 20:15:08,687 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-18 20:15:08,687 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-18 20:15:08,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:08,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
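Illustrative only: the ResourceChecker lines above compare before/after counts (threads, open file descriptors, system load) for each test method and warn when a threshold such as 500 threads is exceeded. A minimal stand-alone probe of the same idea, not HBase's ResourceChecker implementation:

import java.lang.management.ManagementFactory;

final class ThreadCountProbe {
  static int liveThreads() {
    return ManagementFactory.getThreadMXBean().getThreadCount();
  }

  public static void main(String[] args) {
    int before = liveThreads();
    // ... run one test method here ...
    int after = liveThreads();
    if (after > 500) {
      // mirrors the WARN "Thread=512 is superior to 500" above
      System.out.println("Thread=" + after + " is superior to 500");
    }
    if (after > before) {
      System.out.println("Potentially leaked threads: " + (after - before));
    }
  }
}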
2023-07-18 20:15:08,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:08,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:08,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:08,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:08,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:08,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:08,700 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:08,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:08,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:08,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:08,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:08,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:08,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 644 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712508709, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:08,710 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:08,711 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:08,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,712 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:08,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:08,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:08,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-18 20:15:08,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 20:15:08,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:08,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:08,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243] to rsgroup oldgroup 2023-07-18 20:15:08,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 20:15:08,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:08,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 20:15:08,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586, jenkins-hbase4.apache.org,41243,1689711288943] are moved back to default 2023-07-18 20:15:08,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-18 20:15:08,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:08,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:08,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:08,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 20:15:08,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:08,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:08,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-18 20:15:08,741 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:08,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-18 20:15:08,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 20:15:08,743 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 20:15:08,744 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:08,744 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:08,744 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:08,748 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:08,749 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:08,750 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d empty. 
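A hedged Admin-API equivalent of the create request logged above: REGION_REPLICATION => '1' and a single family 'tr' with VERSIONS => '1'; the remaining attributes in the log line are the 2.4 defaults and are omitted. This is a sketch of an equivalent client call, not the code the test ran.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRenameTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("testRename"))
          .setRegionReplication(1)                       // REGION_REPLICATION => '1'
          .setColumnFamily(ColumnFamilyDescriptorBuilder
              .newBuilder(Bytes.toBytes("tr"))           // NAME => 'tr'
              .setMaxVersions(1)                         // VERSIONS => '1'
              .build())
          .build();
      admin.createTable(desc); // master runs the CreateTableProcedure (pid=114 above)
    }
  }
}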
2023-07-18 20:15:08,750 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:08,750 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-18 20:15:08,768 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:08,769 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => f5222ed7b3e1e7231b47206067febb0d, NAME => 'testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:15:08,781 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:08,781 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing f5222ed7b3e1e7231b47206067febb0d, disabling compactions & flushes 2023-07-18 20:15:08,782 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:08,782 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:08,782 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. after waiting 0 ms 2023-07-18 20:15:08,782 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:08,782 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:08,782 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for f5222ed7b3e1e7231b47206067febb0d: 2023-07-18 20:15:08,784 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:08,785 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689711308785"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711308785"}]},"ts":"1689711308785"} 2023-07-18 20:15:08,786 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 20:15:08,787 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:08,787 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711308787"}]},"ts":"1689711308787"} 2023-07-18 20:15:08,788 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-18 20:15:08,791 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:08,791 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:08,792 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:08,792 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:08,792 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, ASSIGN}] 2023-07-18 20:15:08,794 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, ASSIGN 2023-07-18 20:15:08,794 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:15:08,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 20:15:08,945 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 20:15:08,946 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=f5222ed7b3e1e7231b47206067febb0d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:08,946 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689711308946"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711308946"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711308946"}]},"ts":"1689711308946"} 2023-07-18 20:15:08,948 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure f5222ed7b3e1e7231b47206067febb0d, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:09,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 20:15:09,103 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:09,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f5222ed7b3e1e7231b47206067febb0d, NAME => 'testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:09,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:09,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,105 INFO [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,106 DEBUG [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d/tr 2023-07-18 20:15:09,106 DEBUG [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d/tr 2023-07-18 20:15:09,107 INFO [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f5222ed7b3e1e7231b47206067febb0d columnFamilyName tr 2023-07-18 20:15:09,107 INFO [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] regionserver.HStore(310): Store=f5222ed7b3e1e7231b47206067febb0d/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:09,108 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,108 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:09,114 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f5222ed7b3e1e7231b47206067febb0d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10321738240, jitterRate=-0.03871321678161621}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:09,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f5222ed7b3e1e7231b47206067febb0d: 2023-07-18 20:15:09,115 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d., pid=116, masterSystemTime=1689711309099 2023-07-18 20:15:09,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:09,116 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 
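The store opener above prints the effective compaction and split settings: minCompactSize 128 MB, three to ten files per compaction, ratio 1.2 (5.0 off-peak), and a desiredMaxFileSize of roughly 10 GB with jitter. A hedged sketch of reading the configuration keys that, to my understanding, feed those values; the key names are standard HBase settings but are stated here as assumptions rather than taken from this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed keys behind the values CompactionConfiguration logs above.
        long minCompactSize = conf.getLong("hbase.hstore.compaction.min.size",
            128L * 1024 * 1024);                                  // minCompactSize:128 MB
        int minFilesToCompact = conf.getInt("hbase.hstore.compaction.min", 3);
        int maxFilesToCompact = conf.getInt("hbase.hstore.compaction.max", 10);
        float ratio = conf.getFloat("hbase.hstore.compaction.ratio", 1.2F);
        float offPeakRatio = conf.getFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F);
        // The split policy's desiredMaxFileSize starts from hbase.hregion.max.filesize
        // (10 GB by default) and adds a random jitter, which is why the region opens
        // in this log report slightly different sizes.
        long maxFileSize = conf.getLong("hbase.hregion.max.filesize",
            10L * 1024 * 1024 * 1024);
        System.out.printf("minCompactSize=%d files=[%d,%d) ratio=%.1f/%.1f split=%d%n",
            minCompactSize, minFilesToCompact, maxFilesToCompact, ratio, offPeakRatio,
            maxFileSize);
      }
    }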
2023-07-18 20:15:09,117 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=f5222ed7b3e1e7231b47206067febb0d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:09,117 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689711309116"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711309116"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711309116"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711309116"}]},"ts":"1689711309116"} 2023-07-18 20:15:09,119 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-18 20:15:09,119 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure f5222ed7b3e1e7231b47206067febb0d, server=jenkins-hbase4.apache.org,46139,1689711292506 in 170 msec 2023-07-18 20:15:09,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-18 20:15:09,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, ASSIGN in 327 msec 2023-07-18 20:15:09,124 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:09,124 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711309124"}]},"ts":"1689711309124"} 2023-07-18 20:15:09,125 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-18 20:15:09,128 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:09,129 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 389 msec 2023-07-18 20:15:09,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 20:15:09,346 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-18 20:15:09,346 DEBUG [Listener at localhost/39395] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-18 20:15:09,346 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:09,350 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-18 20:15:09,350 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:09,350 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
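With the CREATE procedure finished, the listener thread waits until every region of testRename is reported assigned before the group move below is issued. In test code this is normally a single utility call; a sketch under the assumption that the HBaseTestingUtility instance from this run is available:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignmentSketch {
      // Blocks (up to the 60s timeout seen in the log) until hbase:meta and the
      // AssignmentManager both report every region of the table as open.
      static void waitForTable(HBaseTestingUtility testUtil) throws IOException {
        testUtil.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"), 60000);
      }
    }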
2023-07-18 20:15:09,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-18 20:15:09,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 20:15:09,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:09,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:09,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:09,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-18 20:15:09,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region f5222ed7b3e1e7231b47206067febb0d to RSGroup oldgroup 2023-07-18 20:15:09,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:09,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:09,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:09,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:09,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:09,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, REOPEN/MOVE 2023-07-18 20:15:09,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-18 20:15:09,358 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, REOPEN/MOVE 2023-07-18 20:15:09,358 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f5222ed7b3e1e7231b47206067febb0d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:09,359 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689711309358"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711309358"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711309358"}]},"ts":"1689711309358"} 2023-07-18 20:15:09,360 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure f5222ed7b3e1e7231b47206067febb0d, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:09,519 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f5222ed7b3e1e7231b47206067febb0d, disabling compactions & flushes 2023-07-18 20:15:09,520 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:09,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:09,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. after waiting 0 ms 2023-07-18 20:15:09,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:09,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:09,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:09,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f5222ed7b3e1e7231b47206067febb0d: 2023-07-18 20:15:09,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f5222ed7b3e1e7231b47206067febb0d move to jenkins-hbase4.apache.org,41243,1689711288943 record at close sequenceid=2 2023-07-18 20:15:09,535 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,536 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f5222ed7b3e1e7231b47206067febb0d, regionState=CLOSED 2023-07-18 20:15:09,536 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689711309536"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711309536"}]},"ts":"1689711309536"} 2023-07-18 20:15:09,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-18 20:15:09,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure f5222ed7b3e1e7231b47206067febb0d, server=jenkins-hbase4.apache.org,46139,1689711292506 in 178 msec 2023-07-18 20:15:09,541 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41243,1689711288943; 
forceNewPlan=false, retain=false 2023-07-18 20:15:09,692 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 20:15:09,692 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f5222ed7b3e1e7231b47206067febb0d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:15:09,692 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689711309692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711309692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711309692"}]},"ts":"1689711309692"} 2023-07-18 20:15:09,694 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure f5222ed7b3e1e7231b47206067febb0d, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:15:09,853 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:09,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f5222ed7b3e1e7231b47206067febb0d, NAME => 'testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:09,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:09,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,855 INFO [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,856 DEBUG [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d/tr 2023-07-18 20:15:09,856 DEBUG [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d/tr 2023-07-18 20:15:09,856 INFO [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f5222ed7b3e1e7231b47206067febb0d columnFamilyName tr 2023-07-18 20:15:09,857 INFO [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] regionserver.HStore(310): Store=f5222ed7b3e1e7231b47206067febb0d/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:09,858 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:09,863 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f5222ed7b3e1e7231b47206067febb0d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10491283680, jitterRate=-0.022923067212104797}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:09,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f5222ed7b3e1e7231b47206067febb0d: 2023-07-18 20:15:09,864 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d., pid=119, masterSystemTime=1689711309846 2023-07-18 20:15:09,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:09,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 
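The close on server 46139 and reopen on 41243 above are the server side of the earlier "move tables [testRename] to rsgroup oldgroup" request: the region is relocated onto a server belonging to the target group. On the client side the hbase-rsgroup module exposes this through RSGroupAdminClient; a hedged sketch of the equivalent call, assuming an already-open Connection:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      // Issues RSGroupAdminService.MoveTables; the master log above records the
      // resulting REOPEN/MOVE of region f5222ed7b3e1e7231b47206067febb0d.
      static void moveToOldGroup(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
      }
    }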
2023-07-18 20:15:09,865 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f5222ed7b3e1e7231b47206067febb0d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:15:09,865 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689711309865"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711309865"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711309865"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711309865"}]},"ts":"1689711309865"} 2023-07-18 20:15:09,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-18 20:15:09,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure f5222ed7b3e1e7231b47206067febb0d, server=jenkins-hbase4.apache.org,41243,1689711288943 in 173 msec 2023-07-18 20:15:09,870 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, REOPEN/MOVE in 511 msec 2023-07-18 20:15:10,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-18 20:15:10,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-18 20:15:10,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:10,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:10,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:10,363 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:10,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 20:15:10,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:10,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 20:15:10,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:10,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 20:15:10,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:10,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:10,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:10,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-18 20:15:10,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 20:15:10,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 20:15:10,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:10,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:10,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 20:15:10,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:10,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:10,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:10,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43019] to rsgroup normal 2023-07-18 20:15:10,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 20:15:10,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 20:15:10,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:10,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:10,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 20:15:10,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 20:15:10,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43019,1689711288774] are moved back to default 2023-07-18 20:15:10,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-18 20:15:10,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:10,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:10,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:10,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 20:15:10,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:10,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:10,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-18 20:15:10,403 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:10,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-18 20:15:10,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-18 20:15:10,405 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 20:15:10,406 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 20:15:10,406 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:10,406 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-18 20:15:10,407 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 20:15:10,409 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:10,410 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:10,411 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6 empty. 2023-07-18 20:15:10,412 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:10,412 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-18 20:15:10,430 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:10,431 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9bdaa52a9660aa589eccad822b32b8c6, NAME => 'unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:15:10,442 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:10,442 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 9bdaa52a9660aa589eccad822b32b8c6, disabling compactions & flushes 2023-07-18 20:15:10,442 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:10,442 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:10,442 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. after waiting 0 ms 2023-07-18 20:15:10,442 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:10,442 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 
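Before unmovedTable is created, the client adds a new group "normal" and moves one region server into it: the AddRSGroup and MoveServers requests above, each followed by the /hbase/rsgroup znodes being rewritten, with zero regions needing to move off that server at the time. A hedged sketch of those two administrative calls, using the server address named in the log purely for illustration:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class AddGroupAndMoveServerSketch {
      static void setUpNormalGroup(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Creates the empty group; the master persists it under /hbase/rsgroup/normal.
        rsGroupAdmin.addRSGroup("normal");
        // Moves the region server out of the default group into "normal".
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:43019")),
            "normal");
      }
    }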
2023-07-18 20:15:10,442 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 9bdaa52a9660aa589eccad822b32b8c6: 2023-07-18 20:15:10,447 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:10,448 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689711310448"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711310448"}]},"ts":"1689711310448"} 2023-07-18 20:15:10,449 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 20:15:10,450 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:10,450 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711310450"}]},"ts":"1689711310450"} 2023-07-18 20:15:10,451 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-18 20:15:10,454 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, ASSIGN}] 2023-07-18 20:15:10,456 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, ASSIGN 2023-07-18 20:15:10,457 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:15:10,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-18 20:15:10,608 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=9bdaa52a9660aa589eccad822b32b8c6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:10,609 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689711310608"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711310608"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711310608"}]},"ts":"1689711310608"} 2023-07-18 20:15:10,610 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 9bdaa52a9660aa589eccad822b32b8c6, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:10,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-18 20:15:10,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:10,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9bdaa52a9660aa589eccad822b32b8c6, NAME => 'unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:10,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:10,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:10,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:10,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:10,769 INFO [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:10,770 DEBUG [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6/ut 2023-07-18 20:15:10,770 DEBUG [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6/ut 2023-07-18 20:15:10,771 INFO [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9bdaa52a9660aa589eccad822b32b8c6 columnFamilyName ut 2023-07-18 20:15:10,772 INFO [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] regionserver.HStore(310): Store=9bdaa52a9660aa589eccad822b32b8c6/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:10,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:10,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:10,776 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:10,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:10,779 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9bdaa52a9660aa589eccad822b32b8c6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9899249280, jitterRate=-0.0780605673789978}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:10,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9bdaa52a9660aa589eccad822b32b8c6: 2023-07-18 20:15:10,780 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6., pid=122, masterSystemTime=1689711310762 2023-07-18 20:15:10,782 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:10,782 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 
2023-07-18 20:15:10,782 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=9bdaa52a9660aa589eccad822b32b8c6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:10,782 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689711310782"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711310782"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711310782"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711310782"}]},"ts":"1689711310782"} 2023-07-18 20:15:10,786 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-18 20:15:10,786 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 9bdaa52a9660aa589eccad822b32b8c6, server=jenkins-hbase4.apache.org,46139,1689711292506 in 175 msec 2023-07-18 20:15:10,788 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-18 20:15:10,788 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, ASSIGN in 332 msec 2023-07-18 20:15:10,789 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:10,789 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711310789"}]},"ts":"1689711310789"} 2023-07-18 20:15:10,790 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-18 20:15:10,792 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:10,793 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 392 msec 2023-07-18 20:15:10,852 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-18 20:15:11,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-18 20:15:11,008 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-18 20:15:11,009 DEBUG [Listener at localhost/39395] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-18 20:15:11,009 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:11,012 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 
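The same close-then-reopen (REOPEN/MOVE) sequence that relocated testRename is repeated below for unmovedTable once it is moved into group "normal"; the rsgroup endpoint drives it internally through TransitRegionStateProcedure. For comparison only, an operator can request an equivalent single-region relocation through the plain Admin API. A hedged sketch, with the encoded region name and destination server copied from the surrounding log purely for illustration:

    import java.io.IOException;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionSketch {
      // Asks the master to close the region on its current server and reopen it
      // on the named destination, i.e. the same state transition logged here.
      static void moveRegion(Connection conn) throws IOException {
        try (Admin admin = conn.getAdmin()) {
          admin.move(Bytes.toBytes("9bdaa52a9660aa589eccad822b32b8c6"),
              ServerName.valueOf("jenkins-hbase4.apache.org", 43019, 1689711288774L));
        }
      }
    }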
2023-07-18 20:15:11,013 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:11,013 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 2023-07-18 20:15:11,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-18 20:15:11,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 20:15:11,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 20:15:11,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:11,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:11,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 20:15:11,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-18 20:15:11,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region 9bdaa52a9660aa589eccad822b32b8c6 to RSGroup normal 2023-07-18 20:15:11,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, REOPEN/MOVE 2023-07-18 20:15:11,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-18 20:15:11,023 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, REOPEN/MOVE 2023-07-18 20:15:11,024 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=9bdaa52a9660aa589eccad822b32b8c6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:11,024 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689711311024"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711311024"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711311024"}]},"ts":"1689711311024"} 2023-07-18 20:15:11,025 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 9bdaa52a9660aa589eccad822b32b8c6, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:11,177 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:11,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 
9bdaa52a9660aa589eccad822b32b8c6, disabling compactions & flushes 2023-07-18 20:15:11,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:11,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:11,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. after waiting 0 ms 2023-07-18 20:15:11,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:11,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:11,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:11,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9bdaa52a9660aa589eccad822b32b8c6: 2023-07-18 20:15:11,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9bdaa52a9660aa589eccad822b32b8c6 move to jenkins-hbase4.apache.org,43019,1689711288774 record at close sequenceid=2 2023-07-18 20:15:11,191 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:11,193 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=9bdaa52a9660aa589eccad822b32b8c6, regionState=CLOSED 2023-07-18 20:15:11,193 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689711311193"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711311193"}]},"ts":"1689711311193"} 2023-07-18 20:15:11,197 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-18 20:15:11,197 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 9bdaa52a9660aa589eccad822b32b8c6, server=jenkins-hbase4.apache.org,46139,1689711292506 in 170 msec 2023-07-18 20:15:11,197 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43019,1689711288774; forceNewPlan=false, retain=false 2023-07-18 20:15:11,348 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=9bdaa52a9660aa589eccad822b32b8c6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:11,348 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689711311348"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711311348"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711311348"}]},"ts":"1689711311348"} 2023-07-18 20:15:11,350 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 9bdaa52a9660aa589eccad822b32b8c6, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:15:11,506 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:11,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9bdaa52a9660aa589eccad822b32b8c6, NAME => 'unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:11,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:11,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:11,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:11,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:11,510 INFO [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:11,511 DEBUG [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6/ut 2023-07-18 20:15:11,511 DEBUG [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6/ut 2023-07-18 20:15:11,512 INFO [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
9bdaa52a9660aa589eccad822b32b8c6 columnFamilyName ut 2023-07-18 20:15:11,513 INFO [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] regionserver.HStore(310): Store=9bdaa52a9660aa589eccad822b32b8c6/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:11,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:11,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:11,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:11,518 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9bdaa52a9660aa589eccad822b32b8c6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9465084160, jitterRate=-0.11849534511566162}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:11,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9bdaa52a9660aa589eccad822b32b8c6: 2023-07-18 20:15:11,519 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6., pid=125, masterSystemTime=1689711311502 2023-07-18 20:15:11,521 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:11,521 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 
2023-07-18 20:15:11,521 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=9bdaa52a9660aa589eccad822b32b8c6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:11,521 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689711311521"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711311521"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711311521"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711311521"}]},"ts":"1689711311521"} 2023-07-18 20:15:11,525 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-18 20:15:11,525 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 9bdaa52a9660aa589eccad822b32b8c6, server=jenkins-hbase4.apache.org,43019,1689711288774 in 173 msec 2023-07-18 20:15:11,526 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, REOPEN/MOVE in 503 msec 2023-07-18 20:15:12,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-18 20:15:12,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-18 20:15:12,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:12,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:12,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:12,029 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:12,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 20:15:12,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:12,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 20:15:12,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:12,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 20:15:12,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:12,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-18 20:15:12,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 20:15:12,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:12,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:12,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 20:15:12,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-18 20:15:12,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-18 20:15:12,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:12,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:12,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-18 20:15:12,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:12,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 20:15:12,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:12,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 20:15:12,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:12,049 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:12,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:12,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-18 20:15:12,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 20:15:12,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:12,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:12,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 20:15:12,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 20:15:12,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-18 20:15:12,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region 9bdaa52a9660aa589eccad822b32b8c6 to RSGroup default 2023-07-18 20:15:12,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, REOPEN/MOVE 2023-07-18 20:15:12,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 20:15:12,061 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, REOPEN/MOVE 2023-07-18 20:15:12,061 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9bdaa52a9660aa589eccad822b32b8c6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:12,062 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689711312061"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711312061"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711312061"}]},"ts":"1689711312061"} 2023-07-18 20:15:12,063 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 9bdaa52a9660aa589eccad822b32b8c6, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:15:12,188 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried 
hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 20:15:12,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:12,217 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9bdaa52a9660aa589eccad822b32b8c6, disabling compactions & flushes 2023-07-18 20:15:12,217 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:12,217 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:12,217 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. after waiting 0 ms 2023-07-18 20:15:12,217 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:12,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 20:15:12,225 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:12,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9bdaa52a9660aa589eccad822b32b8c6: 2023-07-18 20:15:12,225 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9bdaa52a9660aa589eccad822b32b8c6 move to jenkins-hbase4.apache.org,46139,1689711292506 record at close sequenceid=5 2023-07-18 20:15:12,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:12,228 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9bdaa52a9660aa589eccad822b32b8c6, regionState=CLOSED 2023-07-18 20:15:12,228 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689711312228"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711312228"}]},"ts":"1689711312228"} 2023-07-18 20:15:12,231 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-18 20:15:12,231 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 9bdaa52a9660aa589eccad822b32b8c6, server=jenkins-hbase4.apache.org,43019,1689711288774 in 166 msec 2023-07-18 20:15:12,232 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:15:12,382 INFO 
[PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9bdaa52a9660aa589eccad822b32b8c6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:12,382 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689711312382"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711312382"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711312382"}]},"ts":"1689711312382"} 2023-07-18 20:15:12,384 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 9bdaa52a9660aa589eccad822b32b8c6, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:12,540 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:12,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9bdaa52a9660aa589eccad822b32b8c6, NAME => 'unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:12,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:12,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:12,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:12,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:12,542 INFO [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:12,543 DEBUG [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6/ut 2023-07-18 20:15:12,543 DEBUG [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6/ut 2023-07-18 20:15:12,543 INFO [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming 
window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9bdaa52a9660aa589eccad822b32b8c6 columnFamilyName ut 2023-07-18 20:15:12,543 INFO [StoreOpener-9bdaa52a9660aa589eccad822b32b8c6-1] regionserver.HStore(310): Store=9bdaa52a9660aa589eccad822b32b8c6/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:12,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:12,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:12,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:12,548 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9bdaa52a9660aa589eccad822b32b8c6; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9944314560, jitterRate=-0.07386353611946106}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:12,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9bdaa52a9660aa589eccad822b32b8c6: 2023-07-18 20:15:12,549 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6., pid=128, masterSystemTime=1689711312535 2023-07-18 20:15:12,550 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:12,550 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 
2023-07-18 20:15:12,551 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9bdaa52a9660aa589eccad822b32b8c6, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:12,551 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689711312551"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711312551"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711312551"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711312551"}]},"ts":"1689711312551"} 2023-07-18 20:15:12,553 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-18 20:15:12,553 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 9bdaa52a9660aa589eccad822b32b8c6, server=jenkins-hbase4.apache.org,46139,1689711292506 in 168 msec 2023-07-18 20:15:12,554 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=9bdaa52a9660aa589eccad822b32b8c6, REOPEN/MOVE in 493 msec 2023-07-18 20:15:13,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-18 20:15:13,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-18 20:15:13,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:13,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43019] to rsgroup default 2023-07-18 20:15:13,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 20:15:13,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:13,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:13,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 20:15:13,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 20:15:13,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-18 20:15:13,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43019,1689711288774] are moved back to normal 2023-07-18 20:15:13,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-18 20:15:13,068 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:13,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-18 20:15:13,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:13,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:13,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 20:15:13,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 20:15:13,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:13,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:13,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 20:15:13,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:13,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:13,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:13,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:13,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:13,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 20:15:13,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 20:15:13,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:13,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-18 20:15:13,089 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:13,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 20:15:13,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:13,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-18 20:15:13,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(345): Moving region f5222ed7b3e1e7231b47206067febb0d to RSGroup default 2023-07-18 20:15:13,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, REOPEN/MOVE 2023-07-18 20:15:13,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 20:15:13,092 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, REOPEN/MOVE 2023-07-18 20:15:13,093 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=f5222ed7b3e1e7231b47206067febb0d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:15:13,093 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689711313093"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711313093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711313093"}]},"ts":"1689711313093"} 2023-07-18 20:15:13,094 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure f5222ed7b3e1e7231b47206067febb0d, server=jenkins-hbase4.apache.org,41243,1689711288943}] 2023-07-18 20:15:13,247 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:13,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f5222ed7b3e1e7231b47206067febb0d, disabling compactions & flushes 2023-07-18 20:15:13,248 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:13,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:13,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 
after waiting 0 ms 2023-07-18 20:15:13,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:13,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 20:15:13,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:13,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f5222ed7b3e1e7231b47206067febb0d: 2023-07-18 20:15:13,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f5222ed7b3e1e7231b47206067febb0d move to jenkins-hbase4.apache.org,43019,1689711288774 record at close sequenceid=5 2023-07-18 20:15:13,254 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:13,255 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=f5222ed7b3e1e7231b47206067febb0d, regionState=CLOSED 2023-07-18 20:15:13,255 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689711313255"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711313255"}]},"ts":"1689711313255"} 2023-07-18 20:15:13,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-18 20:15:13,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure f5222ed7b3e1e7231b47206067febb0d, server=jenkins-hbase4.apache.org,41243,1689711288943 in 162 msec 2023-07-18 20:15:13,258 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43019,1689711288774; forceNewPlan=false, retain=false 2023-07-18 20:15:13,408 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 20:15:13,409 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=f5222ed7b3e1e7231b47206067febb0d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:13,409 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689711313408"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711313408"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711313408"}]},"ts":"1689711313408"} 2023-07-18 20:15:13,411 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure f5222ed7b3e1e7231b47206067febb0d, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:15:13,565 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:13,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f5222ed7b3e1e7231b47206067febb0d, NAME => 'testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:13,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:13,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:13,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:13,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:13,568 INFO [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:13,569 DEBUG [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d/tr 2023-07-18 20:15:13,569 DEBUG [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d/tr 2023-07-18 20:15:13,569 INFO [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f5222ed7b3e1e7231b47206067febb0d columnFamilyName tr 2023-07-18 20:15:13,570 INFO [StoreOpener-f5222ed7b3e1e7231b47206067febb0d-1] regionserver.HStore(310): Store=f5222ed7b3e1e7231b47206067febb0d/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:13,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:13,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:13,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:13,575 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f5222ed7b3e1e7231b47206067febb0d; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11684564320, jitterRate=0.0882098525762558}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:13,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f5222ed7b3e1e7231b47206067febb0d: 2023-07-18 20:15:13,576 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d., pid=131, masterSystemTime=1689711313562 2023-07-18 20:15:13,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:13,577 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 
2023-07-18 20:15:13,578 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=f5222ed7b3e1e7231b47206067febb0d, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:13,578 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689711313578"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711313578"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711313578"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711313578"}]},"ts":"1689711313578"} 2023-07-18 20:15:13,581 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-18 20:15:13,581 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure f5222ed7b3e1e7231b47206067febb0d, server=jenkins-hbase4.apache.org,43019,1689711288774 in 168 msec 2023-07-18 20:15:13,583 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f5222ed7b3e1e7231b47206067febb0d, REOPEN/MOVE in 490 msec 2023-07-18 20:15:14,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-18 20:15:14,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-18 20:15:14,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:14,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243] to rsgroup default 2023-07-18 20:15:14,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:14,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 20:15:14,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:14,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-18 20:15:14,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586, jenkins-hbase4.apache.org,41243,1689711288943] are moved back to newgroup 2023-07-18 20:15:14,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-18 20:15:14,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:14,098 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-18 20:15:14,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:14,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:14,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:14,106 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:14,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:14,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:14,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:14,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:14,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:14,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:14,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:14,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 764 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712514121, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:14,121 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:14,123 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:14,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,124 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:14,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:14,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:14,141 INFO [Listener at localhost/39395] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=506 (was 512), OpenFileDescriptor=777 (was 783), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=372 (was 352) - SystemLoadAverage LEAK? 
-, ProcessCount=173 (was 173), AvailableMemoryMB=2217 (was 2320) 2023-07-18 20:15:14,141 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-18 20:15:14,157 INFO [Listener at localhost/39395] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=506, OpenFileDescriptor=777, MaxFileDescriptor=60000, SystemLoadAverage=372, ProcessCount=173, AvailableMemoryMB=2217 2023-07-18 20:15:14,157 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-18 20:15:14,157 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-18 20:15:14,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:14,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 20:15:14,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:14,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:14,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:14,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:14,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:14,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:14,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:14,171 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:14,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:14,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:14,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-18 20:15:14,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:14,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:14,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:14,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:14,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 792 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712514185, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:14,186 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 20:15:14,188 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:14,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,189 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:14,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:14,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:14,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-18 20:15:14,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:14,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-18 20:15:14,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-18 20:15:14,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-18 20:15:14,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:14,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-18 20:15:14,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:14,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 804 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:57512 deadline: 1689712514197, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-18 20:15:14,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-18 20:15:14,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:14,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:57512 deadline: 1689712514199, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 20:15:14,201 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-18 20:15:14,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-18 20:15:14,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-18 20:15:14,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:14,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 811 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:57512 deadline: 1689712514206, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 20:15:14,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:14,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 20:15:14,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:14,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:14,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:14,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:14,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:14,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:14,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:14,218 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:14,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:14,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:14,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:14,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:14,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:14,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:14,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:14,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 835 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712514228, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:14,232 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:14,233 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:14,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,234 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:14,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:14,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:14,250 INFO [Listener at localhost/39395] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=510 (was 506) Potentially hanging thread: hconnection-0x3d5e85f7-shared-pool-30 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d5e85f7-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=777 (was 777), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=372 (was 372), ProcessCount=173 (was 173), AvailableMemoryMB=2217 (was 2217) 2023-07-18 20:15:14,251 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-18 20:15:14,266 INFO [Listener at localhost/39395] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=510, OpenFileDescriptor=777, MaxFileDescriptor=60000, SystemLoadAverage=372, ProcessCount=173, AvailableMemoryMB=2216 2023-07-18 20:15:14,266 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-18 20:15:14,266 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-18 20:15:14,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:14,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 20:15:14,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:14,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:14,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:14,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:14,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:14,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:14,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:14,279 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:14,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:14,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:14,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:14,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:14,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:14,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:14,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:14,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 863 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712514290, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:14,290 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:14,292 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:14,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,293 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:14,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:14,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:14,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:14,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:14,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1542437742 2023-07-18 20:15:14,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1542437742 2023-07-18 20:15:14,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 
20:15:14,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:14,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:14,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:14,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243] to rsgroup Group_testDisabledTableMove_1542437742 2023-07-18 20:15:14,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1542437742 2023-07-18 20:15:14,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:14,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:14,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:14,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 20:15:14,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586, jenkins-hbase4.apache.org,41243,1689711288943] are moved back to default 2023-07-18 20:15:14,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1542437742 2023-07-18 20:15:14,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:14,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:14,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:14,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1542437742 2023-07-18 20:15:14,322 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:14,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:14,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-18 20:15:14,327 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:14,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-18 20:15:14,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 20:15:14,329 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1542437742 2023-07-18 20:15:14,330 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:14,330 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:14,330 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:14,332 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:14,336 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:14,336 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:14,336 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:14,336 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:14,336 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:14,337 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53 empty. 2023-07-18 20:15:14,337 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89 empty. 2023-07-18 20:15:14,337 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8 empty. 2023-07-18 20:15:14,337 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67 empty. 2023-07-18 20:15:14,337 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47 empty. 2023-07-18 20:15:14,337 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:14,337 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:14,337 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:14,337 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:14,338 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:14,338 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 20:15:14,351 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:14,353 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8444204116f8512c85a3a74e1458da67, NAME => 'Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:15:14,353 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => cb6672298c6951f6ca02c5a1d3758d53, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:15:14,353 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 7a79d4b4b721630fe236851eb887be47, NAME => 'Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:15:14,380 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:14,380 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing cb6672298c6951f6ca02c5a1d3758d53, disabling compactions & flushes 2023-07-18 20:15:14,380 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. 2023-07-18 20:15:14,380 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. 2023-07-18 20:15:14,380 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. after waiting 0 ms 2023-07-18 20:15:14,380 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. 
2023-07-18 20:15:14,380 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. 2023-07-18 20:15:14,380 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for cb6672298c6951f6ca02c5a1d3758d53: 2023-07-18 20:15:14,381 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 53eb20326da5d9f0e05bf3281963ce89, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:15:14,382 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:14,382 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 8444204116f8512c85a3a74e1458da67, disabling compactions & flushes 2023-07-18 20:15:14,382 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. 2023-07-18 20:15:14,382 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. 2023-07-18 20:15:14,382 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. after waiting 0 ms 2023-07-18 20:15:14,382 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. 2023-07-18 20:15:14,382 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. 
2023-07-18 20:15:14,382 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 8444204116f8512c85a3a74e1458da67: 2023-07-18 20:15:14,383 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 617f572cce4c4ecab13421a7c1f03ba8, NAME => 'Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp 2023-07-18 20:15:14,384 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:14,384 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 7a79d4b4b721630fe236851eb887be47, disabling compactions & flushes 2023-07-18 20:15:14,384 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. 2023-07-18 20:15:14,384 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. 2023-07-18 20:15:14,384 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. after waiting 0 ms 2023-07-18 20:15:14,384 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. 2023-07-18 20:15:14,384 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. 2023-07-18 20:15:14,384 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 7a79d4b4b721630fe236851eb887be47: 2023-07-18 20:15:14,395 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:14,396 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 53eb20326da5d9f0e05bf3281963ce89, disabling compactions & flushes 2023-07-18 20:15:14,396 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. 
2023-07-18 20:15:14,396 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. 2023-07-18 20:15:14,396 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. after waiting 0 ms 2023-07-18 20:15:14,396 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. 2023-07-18 20:15:14,396 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. 2023-07-18 20:15:14,396 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 53eb20326da5d9f0e05bf3281963ce89: 2023-07-18 20:15:14,398 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:14,398 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 617f572cce4c4ecab13421a7c1f03ba8, disabling compactions & flushes 2023-07-18 20:15:14,398 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. 2023-07-18 20:15:14,398 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. 2023-07-18 20:15:14,398 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. after waiting 0 ms 2023-07-18 20:15:14,398 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. 2023-07-18 20:15:14,398 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. 
2023-07-18 20:15:14,398 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 617f572cce4c4ecab13421a7c1f03ba8: 2023-07-18 20:15:14,401 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:14,402 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314402"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711314402"}]},"ts":"1689711314402"} 2023-07-18 20:15:14,402 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689711314402"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711314402"}]},"ts":"1689711314402"} 2023-07-18 20:15:14,402 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314402"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711314402"}]},"ts":"1689711314402"} 2023-07-18 20:15:14,402 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314402"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711314402"}]},"ts":"1689711314402"} 2023-07-18 20:15:14,402 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689711314402"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711314402"}]},"ts":"1689711314402"} 2023-07-18 20:15:14,404 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
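[Not part of the original log] The pid=132 CreateTableProcedure entries above write the FS layout for 'Group_testDisabledTableMove' (single family 'f', four split points, "Added 5 regions to meta"). As a minimal, illustrative sketch only — assuming the standard HBase 2.x client API, with a hypothetical class name and default configuration — this is the kind of client call that requests such a pre-split table; only the table name, family and split keys are taken from the log, and the two binary split keys are spelled out as raw bytes to match the escaped forms shown above.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {            // hypothetical class name
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // One column family 'f' with default attributes, as in the log's table descriptor.
      TableDescriptorBuilder table =
          TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"));
      // Four split keys -> five regions, matching "Added 5 regions to meta" above.
      byte[][] splitKeys = new byte[][] {
          Bytes.toBytes("aaaaa"),
          new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },  // i\xBF\x14i\xBE in the log
          new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },          // r\x1C\xC7r\x1B in the log
          Bytes.toBytes("zzzzz")
      };
      // Blocks until the server-side CreateTableProcedure completes.
      admin.createTable(table.build(), splitKeys);
    }
  }
}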
2023-07-18 20:15:14,405 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:14,406 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711314405"}]},"ts":"1689711314405"} 2023-07-18 20:15:14,407 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-18 20:15:14,410 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:14,410 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:14,410 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:14,410 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:14,410 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8444204116f8512c85a3a74e1458da67, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7a79d4b4b721630fe236851eb887be47, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cb6672298c6951f6ca02c5a1d3758d53, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=53eb20326da5d9f0e05bf3281963ce89, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=617f572cce4c4ecab13421a7c1f03ba8, ASSIGN}] 2023-07-18 20:15:14,412 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cb6672298c6951f6ca02c5a1d3758d53, ASSIGN 2023-07-18 20:15:14,412 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7a79d4b4b721630fe236851eb887be47, ASSIGN 2023-07-18 20:15:14,412 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=53eb20326da5d9f0e05bf3281963ce89, ASSIGN 2023-07-18 20:15:14,412 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8444204116f8512c85a3a74e1458da67, ASSIGN 2023-07-18 20:15:14,413 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cb6672298c6951f6ca02c5a1d3758d53, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:15:14,413 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=617f572cce4c4ecab13421a7c1f03ba8, ASSIGN 2023-07-18 20:15:14,413 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7a79d4b4b721630fe236851eb887be47, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:15:14,413 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8444204116f8512c85a3a74e1458da67, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43019,1689711288774; forceNewPlan=false, retain=false 2023-07-18 20:15:14,413 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=53eb20326da5d9f0e05bf3281963ce89, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43019,1689711288774; forceNewPlan=false, retain=false 2023-07-18 20:15:14,414 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=617f572cce4c4ecab13421a7c1f03ba8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46139,1689711292506; forceNewPlan=false, retain=false 2023-07-18 20:15:14,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 20:15:14,563 INFO [jenkins-hbase4:32929] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
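[Not part of the original log] The balancer line above ("Reassigned 5 regions. 5 retained the pre-restart assignment.") and the surrounding TransitRegionStateProcedure entries pick OPENING locations on jenkins-hbase4.apache.org,43019 and jenkins-hbase4.apache.org,46139. A small illustrative sketch, assuming the standard HBase 2.x RegionLocator API and a hypothetical class name, of how a client could list the resulting region-to-server placement; the printed format is only an example.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class PrintRegionLocationsSketch {           // hypothetical class name
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("Group_testDisabledTableMove"))) {
      // One line per region: encoded region name -> hosting server, e.g.
      // 8444204116f8512c85a3a74e1458da67 -> jenkins-hbase4.apache.org,43019,... as logged above.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}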
2023-07-18 20:15:14,567 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=53eb20326da5d9f0e05bf3281963ce89, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:14,567 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=617f572cce4c4ecab13421a7c1f03ba8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:14,567 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=8444204116f8512c85a3a74e1458da67, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:14,567 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=cb6672298c6951f6ca02c5a1d3758d53, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:14,567 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=7a79d4b4b721630fe236851eb887be47, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:14,568 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314567"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711314567"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711314567"}]},"ts":"1689711314567"} 2023-07-18 20:15:14,568 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314567"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711314567"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711314567"}]},"ts":"1689711314567"} 2023-07-18 20:15:14,567 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314567"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711314567"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711314567"}]},"ts":"1689711314567"} 2023-07-18 20:15:14,567 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689711314567"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711314567"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711314567"}]},"ts":"1689711314567"} 2023-07-18 20:15:14,568 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689711314567"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711314567"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711314567"}]},"ts":"1689711314567"} 2023-07-18 20:15:14,573 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=135, state=RUNNABLE; OpenRegionProcedure cb6672298c6951f6ca02c5a1d3758d53, 
server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:14,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=134, state=RUNNABLE; OpenRegionProcedure 7a79d4b4b721630fe236851eb887be47, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:14,575 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=136, state=RUNNABLE; OpenRegionProcedure 53eb20326da5d9f0e05bf3281963ce89, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:15:14,577 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=137, state=RUNNABLE; OpenRegionProcedure 617f572cce4c4ecab13421a7c1f03ba8, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:14,578 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=133, state=RUNNABLE; OpenRegionProcedure 8444204116f8512c85a3a74e1458da67, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:15:14,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 20:15:14,730 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. 2023-07-18 20:15:14,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 617f572cce4c4ecab13421a7c1f03ba8, NAME => 'Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 20:15:14,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:14,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:14,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:14,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:14,732 INFO [StoreOpener-617f572cce4c4ecab13421a7c1f03ba8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:14,734 DEBUG [StoreOpener-617f572cce4c4ecab13421a7c1f03ba8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8/f 2023-07-18 20:15:14,734 DEBUG [StoreOpener-617f572cce4c4ecab13421a7c1f03ba8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8/f 
2023-07-18 20:15:14,734 INFO [StoreOpener-617f572cce4c4ecab13421a7c1f03ba8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 617f572cce4c4ecab13421a7c1f03ba8 columnFamilyName f 2023-07-18 20:15:14,735 INFO [StoreOpener-617f572cce4c4ecab13421a7c1f03ba8-1] regionserver.HStore(310): Store=617f572cce4c4ecab13421a7c1f03ba8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:14,736 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:14,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:14,739 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. 
2023-07-18 20:15:14,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8444204116f8512c85a3a74e1458da67, NAME => 'Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 20:15:14,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:14,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:14,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:14,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:14,741 INFO [StoreOpener-8444204116f8512c85a3a74e1458da67-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:14,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:14,743 DEBUG [StoreOpener-8444204116f8512c85a3a74e1458da67-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67/f 2023-07-18 20:15:14,743 DEBUG [StoreOpener-8444204116f8512c85a3a74e1458da67-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67/f 2023-07-18 20:15:14,743 INFO [StoreOpener-8444204116f8512c85a3a74e1458da67-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8444204116f8512c85a3a74e1458da67 columnFamilyName f 2023-07-18 20:15:14,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:14,744 INFO [StoreOpener-8444204116f8512c85a3a74e1458da67-1] regionserver.HStore(310): Store=8444204116f8512c85a3a74e1458da67/f, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:14,744 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 617f572cce4c4ecab13421a7c1f03ba8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9413634400, jitterRate=-0.12328697741031647}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:14,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 617f572cce4c4ecab13421a7c1f03ba8: 2023-07-18 20:15:14,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:14,745 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8., pid=141, masterSystemTime=1689711314726 2023-07-18 20:15:14,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:14,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. 2023-07-18 20:15:14,747 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. 2023-07-18 20:15:14,747 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. 
2023-07-18 20:15:14,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cb6672298c6951f6ca02c5a1d3758d53, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 20:15:14,747 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=617f572cce4c4ecab13421a7c1f03ba8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:14,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:14,747 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689711314747"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711314747"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711314747"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711314747"}]},"ts":"1689711314747"} 2023-07-18 20:15:14,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:14,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:14,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:14,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:14,749 INFO [StoreOpener-cb6672298c6951f6ca02c5a1d3758d53-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:14,750 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=137 2023-07-18 20:15:14,750 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=137, state=SUCCESS; OpenRegionProcedure 617f572cce4c4ecab13421a7c1f03ba8, server=jenkins-hbase4.apache.org,46139,1689711292506 in 171 msec 2023-07-18 20:15:14,750 DEBUG [StoreOpener-cb6672298c6951f6ca02c5a1d3758d53-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53/f 2023-07-18 20:15:14,750 DEBUG [StoreOpener-cb6672298c6951f6ca02c5a1d3758d53-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53/f 2023-07-18 20:15:14,751 INFO 
[StoreOpener-cb6672298c6951f6ca02c5a1d3758d53-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cb6672298c6951f6ca02c5a1d3758d53 columnFamilyName f 2023-07-18 20:15:14,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:14,751 INFO [StoreOpener-cb6672298c6951f6ca02c5a1d3758d53-1] regionserver.HStore(310): Store=cb6672298c6951f6ca02c5a1d3758d53/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:14,752 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8444204116f8512c85a3a74e1458da67; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10189658240, jitterRate=-0.05101412534713745}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:14,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8444204116f8512c85a3a74e1458da67: 2023-07-18 20:15:14,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:14,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:14,755 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67., pid=142, masterSystemTime=1689711314735 2023-07-18 20:15:14,755 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=617f572cce4c4ecab13421a7c1f03ba8, ASSIGN in 340 msec 2023-07-18 20:15:14,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:14,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. 
2023-07-18 20:15:14,756 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. 2023-07-18 20:15:14,756 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. 2023-07-18 20:15:14,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 53eb20326da5d9f0e05bf3281963ce89, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 20:15:14,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:14,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:14,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:14,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:14,757 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=8444204116f8512c85a3a74e1458da67, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:14,757 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689711314757"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711314757"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711314757"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711314757"}]},"ts":"1689711314757"} 2023-07-18 20:15:14,760 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=133 2023-07-18 20:15:14,760 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=133, state=SUCCESS; OpenRegionProcedure 8444204116f8512c85a3a74e1458da67, server=jenkins-hbase4.apache.org,43019,1689711288774 in 181 msec 2023-07-18 20:15:14,761 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8444204116f8512c85a3a74e1458da67, ASSIGN in 350 msec 2023-07-18 20:15:14,763 INFO [StoreOpener-53eb20326da5d9f0e05bf3281963ce89-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:14,763 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:14,764 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cb6672298c6951f6ca02c5a1d3758d53; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10972579840, jitterRate=0.02190113067626953}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:14,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cb6672298c6951f6ca02c5a1d3758d53: 2023-07-18 20:15:14,764 DEBUG [StoreOpener-53eb20326da5d9f0e05bf3281963ce89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89/f 2023-07-18 20:15:14,764 DEBUG [StoreOpener-53eb20326da5d9f0e05bf3281963ce89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89/f 2023-07-18 20:15:14,764 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53., pid=138, masterSystemTime=1689711314726 2023-07-18 20:15:14,765 INFO [StoreOpener-53eb20326da5d9f0e05bf3281963ce89-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 53eb20326da5d9f0e05bf3281963ce89 columnFamilyName f 2023-07-18 20:15:14,765 INFO [StoreOpener-53eb20326da5d9f0e05bf3281963ce89-1] regionserver.HStore(310): Store=53eb20326da5d9f0e05bf3281963ce89/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:14,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. 2023-07-18 20:15:14,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. 2023-07-18 20:15:14,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. 
2023-07-18 20:15:14,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7a79d4b4b721630fe236851eb887be47, NAME => 'Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 20:15:14,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:14,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:14,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:14,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:14,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:14,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:14,767 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=cb6672298c6951f6ca02c5a1d3758d53, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:14,767 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314767"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711314767"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711314767"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711314767"}]},"ts":"1689711314767"} 2023-07-18 20:15:14,768 INFO [StoreOpener-7a79d4b4b721630fe236851eb887be47-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:14,769 DEBUG [StoreOpener-7a79d4b4b721630fe236851eb887be47-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47/f 2023-07-18 20:15:14,770 DEBUG [StoreOpener-7a79d4b4b721630fe236851eb887be47-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47/f 2023-07-18 
20:15:14,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:14,770 INFO [StoreOpener-7a79d4b4b721630fe236851eb887be47-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7a79d4b4b721630fe236851eb887be47 columnFamilyName f 2023-07-18 20:15:14,771 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=135 2023-07-18 20:15:14,771 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; OpenRegionProcedure cb6672298c6951f6ca02c5a1d3758d53, server=jenkins-hbase4.apache.org,46139,1689711292506 in 196 msec 2023-07-18 20:15:14,771 INFO [StoreOpener-7a79d4b4b721630fe236851eb887be47-1] regionserver.HStore(310): Store=7a79d4b4b721630fe236851eb887be47/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:14,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:14,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:14,772 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cb6672298c6951f6ca02c5a1d3758d53, ASSIGN in 361 msec 2023-07-18 20:15:14,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:14,773 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 53eb20326da5d9f0e05bf3281963ce89; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11951034560, jitterRate=0.11302682757377625}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:14,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 53eb20326da5d9f0e05bf3281963ce89: 2023-07-18 20:15:14,774 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89., pid=140, masterSystemTime=1689711314735 2023-07-18 20:15:14,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:14,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. 2023-07-18 20:15:14,775 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. 2023-07-18 20:15:14,775 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=53eb20326da5d9f0e05bf3281963ce89, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:14,775 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314775"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711314775"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711314775"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711314775"}]},"ts":"1689711314775"} 2023-07-18 20:15:14,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:14,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7a79d4b4b721630fe236851eb887be47; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11577939680, jitterRate=0.07827965915203094}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:14,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7a79d4b4b721630fe236851eb887be47: 2023-07-18 20:15:14,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47., pid=139, masterSystemTime=1689711314726 2023-07-18 20:15:14,779 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=136 2023-07-18 20:15:14,779 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=136, state=SUCCESS; OpenRegionProcedure 53eb20326da5d9f0e05bf3281963ce89, server=jenkins-hbase4.apache.org,43019,1689711288774 in 202 msec 2023-07-18 20:15:14,780 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=53eb20326da5d9f0e05bf3281963ce89, ASSIGN in 369 msec 2023-07-18 20:15:14,780 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. 
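Up to this point the log shows the five pre-split regions of Group_testDisabledTableMove (split points aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B and zzzzz, single column family f) being opened and assigned on the two region servers; the CreateTableProcedure (pid=132) driving these assignments completes just below. A minimal client-side sketch of such a creation, assuming a plain Admin handle rather than the test's own helper code (the class and method names here are invented for illustration), would be:

  // Hedged sketch only: creates a table shaped like the one being assigned above.
  import java.io.IOException;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
  import org.apache.hadoop.hbase.util.Bytes;

  public final class PreSplitTableSketch {
    static void createPreSplitTable(Admin admin) throws IOException {
      TableName name = TableName.valueOf("Group_testDisabledTableMove");
      // Split keys written with the same \xNN escaping the log uses.
      byte[][] splits = new byte[][] {
          Bytes.toBytesBinary("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytesBinary("zzzzz")
      };
      admin.createTable(
          TableDescriptorBuilder.newBuilder(name)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          splits);
    }
  }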
2023-07-18 20:15:14,780 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. 2023-07-18 20:15:14,780 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=7a79d4b4b721630fe236851eb887be47, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:14,780 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314780"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711314780"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711314780"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711314780"}]},"ts":"1689711314780"} 2023-07-18 20:15:14,783 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=134 2023-07-18 20:15:14,783 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=134, state=SUCCESS; OpenRegionProcedure 7a79d4b4b721630fe236851eb887be47, server=jenkins-hbase4.apache.org,46139,1689711292506 in 207 msec 2023-07-18 20:15:14,784 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-18 20:15:14,784 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7a79d4b4b721630fe236851eb887be47, ASSIGN in 373 msec 2023-07-18 20:15:14,785 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:14,785 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711314785"}]},"ts":"1689711314785"} 2023-07-18 20:15:14,786 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-18 20:15:14,790 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:14,791 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 466 msec 2023-07-18 20:15:14,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 20:15:14,932 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-18 20:15:14,933 DEBUG [Listener at localhost/39395] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. 
Timeout = 60000ms 2023-07-18 20:15:14,933 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:14,938 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-18 20:15:14,939 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:14,939 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-18 20:15:14,939 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:14,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 20:15:14,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:14,949 INFO [Listener at localhost/39395] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 20:15:14,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 20:15:14,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-18 20:15:14,956 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711314956"}]},"ts":"1689711314956"} 2023-07-18 20:15:14,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-18 20:15:14,957 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-18 20:15:14,959 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-18 20:15:14,960 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8444204116f8512c85a3a74e1458da67, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7a79d4b4b721630fe236851eb887be47, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cb6672298c6951f6ca02c5a1d3758d53, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=53eb20326da5d9f0e05bf3281963ce89, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=617f572cce4c4ecab13421a7c1f03ba8, UNASSIGN}] 2023-07-18 20:15:14,961 DEBUG [HBase-Metrics2-1] 
regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-18 20:15:14,962 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=53eb20326da5d9f0e05bf3281963ce89, UNASSIGN 2023-07-18 20:15:14,962 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8444204116f8512c85a3a74e1458da67, UNASSIGN 2023-07-18 20:15:14,962 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cb6672298c6951f6ca02c5a1d3758d53, UNASSIGN 2023-07-18 20:15:14,963 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7a79d4b4b721630fe236851eb887be47, UNASSIGN 2023-07-18 20:15:14,963 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testDisabledTableMove' 2023-07-18 20:15:14,963 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=617f572cce4c4ecab13421a7c1f03ba8, UNASSIGN 2023-07-18 20:15:14,963 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=53eb20326da5d9f0e05bf3281963ce89, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:14,963 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=cb6672298c6951f6ca02c5a1d3758d53, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:14,963 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314963"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711314963"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711314963"}]},"ts":"1689711314963"} 2023-07-18 20:15:14,964 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314963"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711314963"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711314963"}]},"ts":"1689711314963"} 2023-07-18 20:15:14,963 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=8444204116f8512c85a3a74e1458da67, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:14,964 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689711314963"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711314963"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711314963"}]},"ts":"1689711314963"} 2023-07-18 20:15:14,964 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=7a79d4b4b721630fe236851eb887be47, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:14,964 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711314964"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711314964"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711314964"}]},"ts":"1689711314964"} 2023-07-18 20:15:14,964 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=617f572cce4c4ecab13421a7c1f03ba8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:14,965 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689711314964"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711314964"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711314964"}]},"ts":"1689711314964"} 2023-07-18 20:15:14,966 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=147, state=RUNNABLE; CloseRegionProcedure 53eb20326da5d9f0e05bf3281963ce89, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:15:14,966 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=146, state=RUNNABLE; CloseRegionProcedure cb6672298c6951f6ca02c5a1d3758d53, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:14,967 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=144, state=RUNNABLE; CloseRegionProcedure 8444204116f8512c85a3a74e1458da67, server=jenkins-hbase4.apache.org,43019,1689711288774}] 2023-07-18 20:15:14,968 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=145, state=RUNNABLE; CloseRegionProcedure 7a79d4b4b721630fe236851eb887be47, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:14,969 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=148, state=RUNNABLE; CloseRegionProcedure 617f572cce4c4ecab13421a7c1f03ba8, server=jenkins-hbase4.apache.org,46139,1689711292506}] 2023-07-18 20:15:15,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-18 20:15:15,118 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:15,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:15,120 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 
cb6672298c6951f6ca02c5a1d3758d53, disabling compactions & flushes 2023-07-18 20:15:15,120 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 53eb20326da5d9f0e05bf3281963ce89, disabling compactions & flushes 2023-07-18 20:15:15,120 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. 2023-07-18 20:15:15,120 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. 2023-07-18 20:15:15,120 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. 2023-07-18 20:15:15,120 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. 2023-07-18 20:15:15,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. after waiting 0 ms 2023-07-18 20:15:15,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. after waiting 0 ms 2023-07-18 20:15:15,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. 2023-07-18 20:15:15,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. 2023-07-18 20:15:15,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:15,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:15,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89. 2023-07-18 20:15:15,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 53eb20326da5d9f0e05bf3281963ce89: 2023-07-18 20:15:15,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53. 
2023-07-18 20:15:15,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cb6672298c6951f6ca02c5a1d3758d53: 2023-07-18 20:15:15,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:15,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:15,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8444204116f8512c85a3a74e1458da67, disabling compactions & flushes 2023-07-18 20:15:15,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. 2023-07-18 20:15:15,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. 2023-07-18 20:15:15,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. after waiting 0 ms 2023-07-18 20:15:15,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. 2023-07-18 20:15:15,131 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=53eb20326da5d9f0e05bf3281963ce89, regionState=CLOSED 2023-07-18 20:15:15,131 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711315131"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711315131"}]},"ts":"1689711315131"} 2023-07-18 20:15:15,133 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=cb6672298c6951f6ca02c5a1d3758d53, regionState=CLOSED 2023-07-18 20:15:15,133 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711315133"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711315133"}]},"ts":"1689711315133"} 2023-07-18 20:15:15,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:15,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:15,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7a79d4b4b721630fe236851eb887be47, disabling compactions & flushes 2023-07-18 20:15:15,135 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. 2023-07-18 20:15:15,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. 
2023-07-18 20:15:15,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. after waiting 0 ms 2023-07-18 20:15:15,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. 2023-07-18 20:15:15,141 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=146 2023-07-18 20:15:15,141 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; CloseRegionProcedure cb6672298c6951f6ca02c5a1d3758d53, server=jenkins-hbase4.apache.org,46139,1689711292506 in 172 msec 2023-07-18 20:15:15,141 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=147 2023-07-18 20:15:15,141 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=147, state=SUCCESS; CloseRegionProcedure 53eb20326da5d9f0e05bf3281963ce89, server=jenkins-hbase4.apache.org,43019,1689711288774 in 172 msec 2023-07-18 20:15:15,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:15,142 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cb6672298c6951f6ca02c5a1d3758d53, UNASSIGN in 181 msec 2023-07-18 20:15:15,143 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=53eb20326da5d9f0e05bf3281963ce89, UNASSIGN in 181 msec 2023-07-18 20:15:15,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:15,143 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67. 2023-07-18 20:15:15,144 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8444204116f8512c85a3a74e1458da67: 2023-07-18 20:15:15,144 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47. 
2023-07-18 20:15:15,144 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7a79d4b4b721630fe236851eb887be47: 2023-07-18 20:15:15,145 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:15,146 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=8444204116f8512c85a3a74e1458da67, regionState=CLOSED 2023-07-18 20:15:15,146 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689711315146"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711315146"}]},"ts":"1689711315146"} 2023-07-18 20:15:15,146 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:15,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:15,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 617f572cce4c4ecab13421a7c1f03ba8, disabling compactions & flushes 2023-07-18 20:15:15,148 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. 2023-07-18 20:15:15,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. 2023-07-18 20:15:15,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. after waiting 0 ms 2023-07-18 20:15:15,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. 
2023-07-18 20:15:15,149 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=7a79d4b4b721630fe236851eb887be47, regionState=CLOSED 2023-07-18 20:15:15,149 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689711315149"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711315149"}]},"ts":"1689711315149"} 2023-07-18 20:15:15,157 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=144 2023-07-18 20:15:15,157 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=144, state=SUCCESS; CloseRegionProcedure 8444204116f8512c85a3a74e1458da67, server=jenkins-hbase4.apache.org,43019,1689711288774 in 186 msec 2023-07-18 20:15:15,157 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=145 2023-07-18 20:15:15,157 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=145, state=SUCCESS; CloseRegionProcedure 7a79d4b4b721630fe236851eb887be47, server=jenkins-hbase4.apache.org,46139,1689711292506 in 186 msec 2023-07-18 20:15:15,160 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8444204116f8512c85a3a74e1458da67, UNASSIGN in 197 msec 2023-07-18 20:15:15,160 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7a79d4b4b721630fe236851eb887be47, UNASSIGN in 197 msec 2023-07-18 20:15:15,163 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:15,164 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8. 
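At this point all five regions of Group_testDisabledTableMove have been closed by their CloseRegionProcedures; the parent DisableTableProcedure (pid=143) marks the table DISABLED in hbase:meta just below. On the client side this entire block is driven by a single admin call. A minimal sketch, assuming an already-open Connection and not the test's actual utilities:

  // Minimal, assumption-laden sketch of the client call behind the disable logged above.
  import java.io.IOException;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;

  public final class DisableTableSketch {
    static void disableIfEnabled(Connection conn) throws IOException {
      try (Admin admin = conn.getAdmin()) {
        TableName table = TableName.valueOf("Group_testDisabledTableMove");
        if (admin.isTableEnabled(table)) {
          admin.disableTable(table); // blocks until the DisableTableProcedure finishes
        }
      }
    }
  }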
2023-07-18 20:15:15,164 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 617f572cce4c4ecab13421a7c1f03ba8: 2023-07-18 20:15:15,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:15,166 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=617f572cce4c4ecab13421a7c1f03ba8, regionState=CLOSED 2023-07-18 20:15:15,166 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689711315166"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711315166"}]},"ts":"1689711315166"} 2023-07-18 20:15:15,170 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=148 2023-07-18 20:15:15,170 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=148, state=SUCCESS; CloseRegionProcedure 617f572cce4c4ecab13421a7c1f03ba8, server=jenkins-hbase4.apache.org,46139,1689711292506 in 198 msec 2023-07-18 20:15:15,176 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=143 2023-07-18 20:15:15,176 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=617f572cce4c4ecab13421a7c1f03ba8, UNASSIGN in 210 msec 2023-07-18 20:15:15,176 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711315176"}]},"ts":"1689711315176"} 2023-07-18 20:15:15,178 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-18 20:15:15,179 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-18 20:15:15,182 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 231 msec 2023-07-18 20:15:15,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-18 20:15:15,260 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-18 20:15:15,260 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1542437742 2023-07-18 20:15:15,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1542437742 2023-07-18 20:15:15,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1542437742 2023-07-18 20:15:15,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:15,267 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:15,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:15,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-18 20:15:15,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1542437742, current retry=0 2023-07-18 20:15:15,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1542437742. 2023-07-18 20:15:15,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:15,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:15,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:15,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 20:15:15,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:15,280 INFO [Listener at localhost/39395] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 20:15:15,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 20:15:15,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:15,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 923 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:57512 deadline: 1689711375280, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-18 20:15:15,282 DEBUG [Listener at localhost/39395] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-18 20:15:15,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-18 20:15:15,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 20:15:15,287 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 20:15:15,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1542437742' 2023-07-18 20:15:15,288 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 20:15:15,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1542437742 2023-07-18 20:15:15,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:15,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:15,301 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:15,301 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:15,304 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:15,304 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:15,304 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:15,307 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67/recovered.edits] 2023-07-18 20:15:15,309 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53/recovered.edits] 2023-07-18 20:15:15,309 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47/recovered.edits] 2023-07-18 20:15:15,311 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89/recovered.edits] 2023-07-18 20:15:15,311 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8/f, FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8/recovered.edits] 2023-07-18 20:15:15,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:15,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-18 20:15:15,339 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67/recovered.edits/4.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67/recovered.edits/4.seqid 2023-07-18 20:15:15,339 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89/recovered.edits/4.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89/recovered.edits/4.seqid 2023-07-18 20:15:15,339 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): 
Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53/recovered.edits/4.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53/recovered.edits/4.seqid 2023-07-18 20:15:15,340 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/cb6672298c6951f6ca02c5a1d3758d53 2023-07-18 20:15:15,341 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8/recovered.edits/4.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8/recovered.edits/4.seqid 2023-07-18 20:15:15,341 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/53eb20326da5d9f0e05bf3281963ce89 2023-07-18 20:15:15,341 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/8444204116f8512c85a3a74e1458da67 2023-07-18 20:15:15,341 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47/recovered.edits/4.seqid to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/archive/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47/recovered.edits/4.seqid 2023-07-18 20:15:15,342 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/617f572cce4c4ecab13421a7c1f03ba8 2023-07-18 20:15:15,343 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/.tmp/data/default/Group_testDisabledTableMove/7a79d4b4b721630fe236851eb887be47 2023-07-18 20:15:15,343 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 20:15:15,346 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 20:15:15,348 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-18 20:15:15,354 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
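The records above trace the client-driven cleanup of Group_testDisabledTableMove: a DisableTable call that fails with TableNotEnabledException because the table is already disabled, then a DeleteTableProcedure (pid=155) that archives the region directories and removes the descriptor. A minimal sketch of that sequence through the public Admin API, assuming a default client configuration (illustrative only, not the test's own code):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropDisabledTable {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testDisabledTableMove");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Disabling an already-disabled table throws TableNotEnabledException,
          // which is what the DisableTable RPC above ran into; check first instead.
          if (admin.isTableEnabled(table)) {
            admin.disableTable(table);
          }
          // deleteTable() submits a DeleteTableProcedure on the master, which archives the
          // region directories and removes the table's meta rows and descriptor.
          admin.deleteTable(table);
        }
      }
    }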
2023-07-18 20:15:15,356 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 20:15:15,356 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-18 20:15:15,356 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711315356"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:15,356 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711315356"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:15,356 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711315356"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:15,356 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711315356"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:15,356 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711315356"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:15,359 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 20:15:15,359 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8444204116f8512c85a3a74e1458da67, NAME => 'Group_testDisabledTableMove,,1689711314324.8444204116f8512c85a3a74e1458da67.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 7a79d4b4b721630fe236851eb887be47, NAME => 'Group_testDisabledTableMove,aaaaa,1689711314324.7a79d4b4b721630fe236851eb887be47.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => cb6672298c6951f6ca02c5a1d3758d53, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689711314324.cb6672298c6951f6ca02c5a1d3758d53.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 53eb20326da5d9f0e05bf3281963ce89, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689711314324.53eb20326da5d9f0e05bf3281963ce89.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 617f572cce4c4ecab13421a7c1f03ba8, NAME => 'Group_testDisabledTableMove,zzzzz,1689711314324.617f572cce4c4ecab13421a7c1f03ba8.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 20:15:15,359 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
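Once the procedure reports "Deleted 5 regions from META", no region row for the table should remain in hbase:meta. A small verification sketch using the plain client API, a prefix scan over hbase:meta (this is only an illustrative check, not what DeleteTableProcedure does internally):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaRowCheck {
      public static void main(String[] args) throws Exception {
        // Region rows in hbase:meta are keyed "<table>,<startKey>,<regionId>.<encodedName>.",
        // so a scan over the "<table>," prefix finds whatever is left for this table.
        byte[] prefix = Bytes.toBytes("Group_testDisabledTableMove,");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          Scan scan = new Scan().setRowPrefixFilter(prefix);
          int rows = 0;
          try (ResultScanner scanner = meta.getScanner(scan)) {
            for (Result r : scanner) {
              rows++;
            }
          }
          System.out.println("meta rows still present for the table: " + rows); // expected 0
        }
      }
    }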
2023-07-18 20:15:15,359 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689711315359"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:15,361 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-18 20:15:15,363 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 20:15:15,364 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 80 msec 2023-07-18 20:15:15,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-18 20:15:15,437 INFO [Listener at localhost/39395] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-18 20:15:15,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:15,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:15,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:15,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
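The "Updating znode: /hbase/rsgroup/..." and "Writing ZK GroupInfo count" records above come from RSGroupInfoManagerImpl mirroring group membership into ZooKeeper under /hbase/rsgroup. A quick, hedged diagnostic for inspecting that state with the plain ZooKeeper client; the quorum address 127.0.0.1:52937 is the one this particular run logs and changes every run:

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class ListRsGroupZnodes {
      public static void main(String[] args) throws Exception {
        // Connect to this run's quorum and list the per-group znodes under /hbase/rsgroup.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:52937", 30000, event -> { });
        try {
          List<String> groups = zk.getChildren("/hbase/rsgroup", false);
          System.out.println("rsgroup znodes: " + groups); // e.g. [default, master]
        } finally {
          zk.close();
        }
      }
    }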
2023-07-18 20:15:15,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:15,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243] to rsgroup default 2023-07-18 20:15:15,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1542437742 2023-07-18 20:15:15,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:15,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:15,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:15,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1542437742, current retry=0 2023-07-18 20:15:15,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37953,1689711288586, jenkins-hbase4.apache.org,41243,1689711288943] are moved back to Group_testDisabledTableMove_1542437742 2023-07-18 20:15:15,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1542437742 => default 2023-07-18 20:15:15,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:15,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1542437742 2023-07-18 20:15:15,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:15,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:15,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 20:15:15,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:15,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:15,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
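The MoveServers and RemoveRSGroup requests above are the test's teardown returning the group's two servers to "default" and dropping the now-empty test group. A hedged sketch of the same calls through RSGroupAdminClient from this module (moveServers/removeRSGroup are the methods visible in the stack traces below; the constructor usage and configuration here are assumptions, and the real cleanup lives in TestRSGroupsBase.tearDownAfterMethod):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class GroupTeardown {
      public static void main(String[] args) throws Exception {
        String testGroup = "Group_testDisabledTableMove_1542437742"; // group name from the log
        Set<Address> servers = new HashSet<>(Arrays.asList(
            Address.fromString("jenkins-hbase4.apache.org:37953"),
            Address.fromString("jenkins-hbase4.apache.org:41243")));
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // Same RPC sequence as above: MoveServers back to 'default', then RemoveRSGroup.
          groups.moveServers(servers, "default");
          groups.removeRSGroup(testGroup);
        }
      }
    }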
2023-07-18 20:15:15,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:15,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:15,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:15,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:15,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:15,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:15,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:15,463 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:15,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:15,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:15,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:15,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:15,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:15,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:15,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:15,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:15,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:15,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 957 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712515472, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:15,473 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:15,475 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:15,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:15,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:15,476 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:15,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:15,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:15,495 INFO [Listener at localhost/39395] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=514 (was 510) Potentially hanging thread: hconnection-0x422d8bf2-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1807334389_17 at /127.0.0.1:46956 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=798 (was 777) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=374 (was 372) - SystemLoadAverage LEAK? -, ProcessCount=171 (was 173), AvailableMemoryMB=4400 (was 2216) - AvailableMemoryMB LEAK? 
- 2023-07-18 20:15:15,496 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-18 20:15:15,517 INFO [Listener at localhost/39395] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=514, OpenFileDescriptor=798, MaxFileDescriptor=60000, SystemLoadAverage=374, ProcessCount=171, AvailableMemoryMB=4405 2023-07-18 20:15:15,517 WARN [Listener at localhost/39395] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-18 20:15:15,517 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-18 20:15:15,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:15,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:15,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:15,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 20:15:15,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:15,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:15,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:15,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:15,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:15,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:15,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:15,532 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:15,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:15,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:15,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-18 20:15:15,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:15,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:15,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:15,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:15,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32929] to rsgroup master 2023-07-18 20:15:15,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:15,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] ipc.CallRunner(144): callId: 985 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57512 deadline: 1689712515551, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 2023-07-18 20:15:15,552 WARN [Listener at localhost/39395] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32929 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 20:15:15,554 INFO [Listener at localhost/39395] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:15,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:15,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:15,555 INFO [Listener at localhost/39395] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37953, jenkins-hbase4.apache.org:41243, jenkins-hbase4.apache.org:43019, jenkins-hbase4.apache.org:46139], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:15,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:15,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32929] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:15,557 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 20:15:15,557 INFO [Listener at localhost/39395] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 20:15:15,557 DEBUG [Listener at localhost/39395] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4b32111a to 127.0.0.1:52937 2023-07-18 20:15:15,557 DEBUG [Listener at localhost/39395] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:15,558 DEBUG [Listener at localhost/39395] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 20:15:15,559 DEBUG [Listener at localhost/39395] util.JVMClusterUtil(257): Found active master hash=1507873250, stopped=false 2023-07-18 20:15:15,559 DEBUG [Listener at localhost/39395] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 20:15:15,559 DEBUG [Listener at localhost/39395] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 20:15:15,559 INFO [Listener at localhost/39395] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,32929,1689711286630 2023-07-18 20:15:15,562 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:15,562 INFO [Listener at localhost/39395] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 20:15:15,562 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:15,562 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:15,563 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:15,562 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:15,563 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:15,563 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:15,563 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:15,563 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:15,563 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:15,563 DEBUG [Listener at localhost/39395] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2636192b to 127.0.0.1:52937 2023-07-18 20:15:15,563 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:15,564 DEBUG [Listener at localhost/39395] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:15,565 INFO [Listener at localhost/39395] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37953,1689711288586' ***** 2023-07-18 20:15:15,565 INFO [Listener at localhost/39395] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 20:15:15,565 INFO [Listener at localhost/39395] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43019,1689711288774' ***** 2023-07-18 20:15:15,565 INFO [Listener at localhost/39395] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 20:15:15,565 INFO [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:15,565 INFO [Listener at localhost/39395] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41243,1689711288943' ***** 2023-07-18 20:15:15,566 INFO [Listener at localhost/39395] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 20:15:15,565 INFO [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:15,566 INFO [Listener at localhost/39395] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46139,1689711292506' ***** 2023-07-18 20:15:15,566 INFO 
[RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:15,566 INFO [Listener at localhost/39395] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 20:15:15,570 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:15,586 INFO [RS:1;jenkins-hbase4:43019] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@67816748{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:15,586 INFO [RS:0;jenkins-hbase4:37953] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@40f2000c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:15,586 INFO [RS:3;jenkins-hbase4:46139] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@201e20bc{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:15,586 INFO [RS:2;jenkins-hbase4:41243] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@38a30dd7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:15,591 INFO [RS:1;jenkins-hbase4:43019] server.AbstractConnector(383): Stopped ServerConnector@1ba8dae2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:15,591 INFO [RS:2;jenkins-hbase4:41243] server.AbstractConnector(383): Stopped ServerConnector@16240f3c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:15,591 INFO [RS:1;jenkins-hbase4:43019] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:15,591 INFO [RS:0;jenkins-hbase4:37953] server.AbstractConnector(383): Stopped ServerConnector@22a36f37{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:15,591 INFO [RS:3;jenkins-hbase4:46139] server.AbstractConnector(383): Stopped ServerConnector@266cf522{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:15,592 INFO [RS:1;jenkins-hbase4:43019] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@46770358{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:15,592 INFO [RS:0;jenkins-hbase4:37953] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:15,591 INFO [RS:2;jenkins-hbase4:41243] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:15,593 INFO [RS:1;jenkins-hbase4:43019] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@64f476b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:15,594 INFO [RS:0;jenkins-hbase4:37953] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c80b18{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:15,592 INFO [RS:3;jenkins-hbase4:46139] 
session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:15,595 INFO [RS:0;jenkins-hbase4:37953] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1c6f9d30{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:15,594 INFO [RS:2;jenkins-hbase4:41243] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7431440f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:15,596 INFO [RS:3;jenkins-hbase4:46139] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6381a2d2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:15,597 INFO [RS:2;jenkins-hbase4:41243] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@145f3cb8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:15,597 INFO [RS:3;jenkins-hbase4:46139] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b0143bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:15,599 INFO [RS:0;jenkins-hbase4:37953] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 20:15:15,600 INFO [RS:2;jenkins-hbase4:41243] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 20:15:15,600 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 20:15:15,600 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 20:15:15,600 INFO [RS:2;jenkins-hbase4:41243] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 20:15:15,600 INFO [RS:3;jenkins-hbase4:46139] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 20:15:15,601 INFO [RS:0;jenkins-hbase4:37953] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 20:15:15,601 INFO [RS:0;jenkins-hbase4:37953] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 20:15:15,601 INFO [RS:3;jenkins-hbase4:46139] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 20:15:15,601 INFO [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:15,601 INFO [RS:2;jenkins-hbase4:41243] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 20:15:15,601 DEBUG [RS:0;jenkins-hbase4:37953] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x233e4e6e to 127.0.0.1:52937 2023-07-18 20:15:15,601 DEBUG [RS:0;jenkins-hbase4:37953] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:15,601 INFO [RS:3;jenkins-hbase4:46139] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-18 20:15:15,601 INFO [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37953,1689711288586; all regions closed. 2023-07-18 20:15:15,601 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 20:15:15,601 INFO [RS:1;jenkins-hbase4:43019] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 20:15:15,601 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(3305): Received CLOSE for 9bdaa52a9660aa589eccad822b32b8c6 2023-07-18 20:15:15,601 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 20:15:15,601 INFO [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:15:15,601 DEBUG [RS:2;jenkins-hbase4:41243] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x53b63469 to 127.0.0.1:52937 2023-07-18 20:15:15,602 DEBUG [RS:2;jenkins-hbase4:41243] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:15,602 INFO [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41243,1689711288943; all regions closed. 2023-07-18 20:15:15,602 INFO [RS:1;jenkins-hbase4:43019] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 20:15:15,602 INFO [RS:1;jenkins-hbase4:43019] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 20:15:15,602 INFO [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(3305): Received CLOSE for f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:15,602 INFO [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:15,603 DEBUG [RS:1;jenkins-hbase4:43019] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x753f4dbc to 127.0.0.1:52937 2023-07-18 20:15:15,602 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(3305): Received CLOSE for 22291b453f6d085322d417bcf0fb99d8 2023-07-18 20:15:15,603 DEBUG [RS:1;jenkins-hbase4:43019] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:15,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f5222ed7b3e1e7231b47206067febb0d, disabling compactions & flushes 2023-07-18 20:15:15,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9bdaa52a9660aa589eccad822b32b8c6, disabling compactions & flushes 2023-07-18 20:15:15,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:15,603 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(3305): Received CLOSE for f1d06ae394b6dc19534084668df26a36 2023-07-18 20:15:15,604 INFO [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 20:15:15,604 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:15,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 
2023-07-18 20:15:15,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:15,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. after waiting 0 ms 2023-07-18 20:15:15,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:15,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. after waiting 0 ms 2023-07-18 20:15:15,604 DEBUG [RS:3;jenkins-hbase4:46139] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x74ae934f to 127.0.0.1:52937 2023-07-18 20:15:15,604 DEBUG [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1478): Online Regions={f5222ed7b3e1e7231b47206067febb0d=testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d.} 2023-07-18 20:15:15,605 DEBUG [RS:3;jenkins-hbase4:46139] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:15,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:15,605 INFO [RS:3;jenkins-hbase4:46139] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 20:15:15,605 INFO [RS:3;jenkins-hbase4:46139] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 20:15:15,605 INFO [RS:3;jenkins-hbase4:46139] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 20:15:15,605 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 20:15:15,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 
2023-07-18 20:15:15,605 DEBUG [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1504): Waiting on f5222ed7b3e1e7231b47206067febb0d 2023-07-18 20:15:15,611 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:15,612 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:15,612 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-18 20:15:15,612 DEBUG [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1478): Online Regions={9bdaa52a9660aa589eccad822b32b8c6=unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6., 22291b453f6d085322d417bcf0fb99d8=hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8., 1588230740=hbase:meta,,1.1588230740, f1d06ae394b6dc19534084668df26a36=hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36.} 2023-07-18 20:15:15,612 DEBUG [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1504): Waiting on 1588230740, 22291b453f6d085322d417bcf0fb99d8, 9bdaa52a9660aa589eccad822b32b8c6, f1d06ae394b6dc19534084668df26a36 2023-07-18 20:15:15,612 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 20:15:15,612 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:15,613 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 20:15:15,613 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 20:15:15,613 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 20:15:15,613 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 20:15:15,613 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=36.31 KB heapSize=59.22 KB 2023-07-18 20:15:15,613 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:15,626 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/unmovedTable/9bdaa52a9660aa589eccad822b32b8c6/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 20:15:15,627 DEBUG [RS:2;jenkins-hbase4:41243] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs 2023-07-18 20:15:15,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 2023-07-18 20:15:15,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9bdaa52a9660aa589eccad822b32b8c6: 2023-07-18 20:15:15,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689711310400.9bdaa52a9660aa589eccad822b32b8c6. 
2023-07-18 20:15:15,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 22291b453f6d085322d417bcf0fb99d8, disabling compactions & flushes 2023-07-18 20:15:15,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:15:15,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:15:15,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. after waiting 0 ms 2023-07-18 20:15:15,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:15:15,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 22291b453f6d085322d417bcf0fb99d8 1/1 column families, dataSize=27.11 KB heapSize=44.64 KB 2023-07-18 20:15:15,627 INFO [RS:2;jenkins-hbase4:41243] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41243%2C1689711288943:(num 1689711290901) 2023-07-18 20:15:15,629 DEBUG [RS:2;jenkins-hbase4:41243] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:15,629 INFO [RS:2;jenkins-hbase4:41243] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:15,630 INFO [RS:2;jenkins-hbase4:41243] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:15,631 INFO [RS:2;jenkins-hbase4:41243] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 20:15:15,631 INFO [RS:2;jenkins-hbase4:41243] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 20:15:15,631 INFO [RS:2;jenkins-hbase4:41243] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 20:15:15,632 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 20:15:15,632 DEBUG [RS:0;jenkins-hbase4:37953] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs 2023-07-18 20:15:15,633 INFO [RS:2;jenkins-hbase4:41243] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41243 2023-07-18 20:15:15,633 INFO [RS:0;jenkins-hbase4:37953] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37953%2C1689711288586:(num 1689711290903) 2023-07-18 20:15:15,633 DEBUG [RS:0;jenkins-hbase4:37953] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:15,633 INFO [RS:0;jenkins-hbase4:37953] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:15,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/default/testRename/f5222ed7b3e1e7231b47206067febb0d/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 20:15:15,633 INFO [RS:0;jenkins-hbase4:37953] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:15,634 INFO [RS:0;jenkins-hbase4:37953] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 20:15:15,634 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 20:15:15,634 INFO [RS:0;jenkins-hbase4:37953] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 20:15:15,634 INFO [RS:0;jenkins-hbase4:37953] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 20:15:15,635 INFO [RS:0;jenkins-hbase4:37953] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37953 2023-07-18 20:15:15,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 2023-07-18 20:15:15,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f5222ed7b3e1e7231b47206067febb0d: 2023-07-18 20:15:15,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689711308739.f5222ed7b3e1e7231b47206067febb0d. 
2023-07-18 20:15:15,650 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=33.39 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/info/afd6d6704d3f4dfc827e83b2bdb172bd 2023-07-18 20:15:15,660 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for afd6d6704d3f4dfc827e83b2bdb172bd 2023-07-18 20:15:15,672 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/rep_barrier/5d08d9e23ec145c4bd824891098f9014 2023-07-18 20:15:15,678 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5d08d9e23ec145c4bd824891098f9014 2023-07-18 20:15:15,692 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/table/216c8d6b547f493cb52b49e753417f4c 2023-07-18 20:15:15,698 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 216c8d6b547f493cb52b49e753417f4c 2023-07-18 20:15:15,699 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/info/afd6d6704d3f4dfc827e83b2bdb172bd as hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/info/afd6d6704d3f4dfc827e83b2bdb172bd 2023-07-18 20:15:15,706 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:15,706 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:15:15,706 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:15:15,706 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:15:15,706 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:15,706 
DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:15,706 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:15,706 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:15,706 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41243,1689711288943 2023-07-18 20:15:15,707 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:15,707 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for afd6d6704d3f4dfc827e83b2bdb172bd 2023-07-18 20:15:15,706 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:15,707 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:15,707 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37953,1689711288586 2023-07-18 20:15:15,707 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/info/afd6d6704d3f4dfc827e83b2bdb172bd, entries=52, sequenceid=210, filesize=10.7 K 2023-07-18 20:15:15,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/rep_barrier/5d08d9e23ec145c4bd824891098f9014 as hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/rep_barrier/5d08d9e23ec145c4bd824891098f9014 2023-07-18 20:15:15,714 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5d08d9e23ec145c4bd824891098f9014 2023-07-18 20:15:15,714 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): 
Added hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/rep_barrier/5d08d9e23ec145c4bd824891098f9014, entries=8, sequenceid=210, filesize=5.8 K 2023-07-18 20:15:15,714 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-18 20:15:15,714 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-18 20:15:15,715 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/.tmp/table/216c8d6b547f493cb52b49e753417f4c as hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/table/216c8d6b547f493cb52b49e753417f4c 2023-07-18 20:15:15,720 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 216c8d6b547f493cb52b49e753417f4c 2023-07-18 20:15:15,720 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/table/216c8d6b547f493cb52b49e753417f4c, entries=16, sequenceid=210, filesize=6.0 K 2023-07-18 20:15:15,721 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~36.31 KB/37186, heapSize ~59.17 KB/60592, currentSize=0 B/0 for 1588230740 in 108ms, sequenceid=210, compaction requested=false 2023-07-18 20:15:15,732 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=101 2023-07-18 20:15:15,733 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 20:15:15,733 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 20:15:15,734 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 20:15:15,734 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 20:15:15,806 INFO [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43019,1689711288774; all regions closed. 
2023-07-18 20:15:15,812 DEBUG [RS:1;jenkins-hbase4:43019] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs 2023-07-18 20:15:15,812 INFO [RS:1;jenkins-hbase4:43019] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43019%2C1689711288774.meta:.meta(num 1689711291222) 2023-07-18 20:15:15,812 DEBUG [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1504): Waiting on 22291b453f6d085322d417bcf0fb99d8, f1d06ae394b6dc19534084668df26a36 2023-07-18 20:15:15,815 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37953,1689711288586] 2023-07-18 20:15:15,815 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37953,1689711288586; numProcessing=1 2023-07-18 20:15:15,818 DEBUG [RS:1;jenkins-hbase4:43019] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs 2023-07-18 20:15:15,818 INFO [RS:1;jenkins-hbase4:43019] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43019%2C1689711288774:(num 1689711290901) 2023-07-18 20:15:15,818 DEBUG [RS:1;jenkins-hbase4:43019] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:15,819 INFO [RS:1;jenkins-hbase4:43019] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:15,819 INFO [RS:1;jenkins-hbase4:43019] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:15,819 INFO [RS:1;jenkins-hbase4:43019] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 20:15:15,819 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 20:15:15,819 INFO [RS:1;jenkins-hbase4:43019] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 20:15:15,819 INFO [RS:1;jenkins-hbase4:43019] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 20:15:15,820 INFO [RS:1;jenkins-hbase4:43019] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43019 2023-07-18 20:15:15,871 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-18 20:15:15,871 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-18 20:15:15,916 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:15,916 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:37953-0x1017a1298a70001, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:15,916 INFO [RS:0;jenkins-hbase4:37953] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37953,1689711288586; zookeeper connection closed. 
2023-07-18 20:15:15,916 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@50fa8fe2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@50fa8fe2 2023-07-18 20:15:15,917 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:15,917 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:15,917 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43019,1689711288774 2023-07-18 20:15:15,917 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37953,1689711288586 already deleted, retry=false 2023-07-18 20:15:15,917 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37953,1689711288586 expired; onlineServers=3 2023-07-18 20:15:15,917 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41243,1689711288943] 2023-07-18 20:15:15,917 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41243,1689711288943; numProcessing=2 2023-07-18 20:15:16,013 DEBUG [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1504): Waiting on 22291b453f6d085322d417bcf0fb99d8, f1d06ae394b6dc19534084668df26a36 2023-07-18 20:15:16,016 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:16,016 INFO [RS:2;jenkins-hbase4:41243] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41243,1689711288943; zookeeper connection closed. 
2023-07-18 20:15:16,016 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:41243-0x1017a1298a70003, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:16,016 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6aeb6699] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6aeb6699 2023-07-18 20:15:16,018 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41243,1689711288943 already deleted, retry=false 2023-07-18 20:15:16,018 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41243,1689711288943 expired; onlineServers=2 2023-07-18 20:15:16,018 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43019,1689711288774] 2023-07-18 20:15:16,019 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43019,1689711288774; numProcessing=3 2023-07-18 20:15:16,021 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43019,1689711288774 already deleted, retry=false 2023-07-18 20:15:16,021 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43019,1689711288774 expired; onlineServers=1 2023-07-18 20:15:16,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.11 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/.tmp/m/795647e5694c4cb297168a2a2b3416db 2023-07-18 20:15:16,062 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 795647e5694c4cb297168a2a2b3416db 2023-07-18 20:15:16,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/.tmp/m/795647e5694c4cb297168a2a2b3416db as hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/m/795647e5694c4cb297168a2a2b3416db 2023-07-18 20:15:16,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 795647e5694c4cb297168a2a2b3416db 2023-07-18 20:15:16,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/m/795647e5694c4cb297168a2a2b3416db, entries=28, sequenceid=101, filesize=6.1 K 2023-07-18 20:15:16,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.11 KB/27762, heapSize ~44.63 KB/45696, currentSize=0 B/0 for 22291b453f6d085322d417bcf0fb99d8 in 442ms, sequenceid=101, compaction requested=false 2023-07-18 20:15:16,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/rsgroup/22291b453f6d085322d417bcf0fb99d8/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-18 20:15:16,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 20:15:16,077 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:15:16,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 22291b453f6d085322d417bcf0fb99d8: 2023-07-18 20:15:16,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689711291701.22291b453f6d085322d417bcf0fb99d8. 2023-07-18 20:15:16,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f1d06ae394b6dc19534084668df26a36, disabling compactions & flushes 2023-07-18 20:15:16,077 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:15:16,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:15:16,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. after waiting 0 ms 2023-07-18 20:15:16,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:15:16,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/data/hbase/namespace/f1d06ae394b6dc19534084668df26a36/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-18 20:15:16,082 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:15:16,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f1d06ae394b6dc19534084668df26a36: 2023-07-18 20:15:16,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689711291485.f1d06ae394b6dc19534084668df26a36. 2023-07-18 20:15:16,117 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:16,117 INFO [RS:1;jenkins-hbase4:43019] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43019,1689711288774; zookeeper connection closed. 
2023-07-18 20:15:16,118 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:43019-0x1017a1298a70002, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:16,118 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@768e8a91] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@768e8a91 2023-07-18 20:15:16,213 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46139,1689711292506; all regions closed. 2023-07-18 20:15:16,220 DEBUG [RS:3;jenkins-hbase4:46139] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs 2023-07-18 20:15:16,220 INFO [RS:3;jenkins-hbase4:46139] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46139%2C1689711292506.meta:.meta(num 1689711300477) 2023-07-18 20:15:16,225 DEBUG [RS:3;jenkins-hbase4:46139] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/oldWALs 2023-07-18 20:15:16,225 INFO [RS:3;jenkins-hbase4:46139] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46139%2C1689711292506:(num 1689711293007) 2023-07-18 20:15:16,225 DEBUG [RS:3;jenkins-hbase4:46139] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:16,225 INFO [RS:3;jenkins-hbase4:46139] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:16,226 INFO [RS:3;jenkins-hbase4:46139] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:16,226 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 20:15:16,227 INFO [RS:3;jenkins-hbase4:46139] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46139 2023-07-18 20:15:16,229 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46139,1689711292506 2023-07-18 20:15:16,229 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:16,230 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46139,1689711292506] 2023-07-18 20:15:16,230 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46139,1689711292506; numProcessing=4 2023-07-18 20:15:16,231 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46139,1689711292506 already deleted, retry=false 2023-07-18 20:15:16,231 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46139,1689711292506 expired; onlineServers=0 2023-07-18 20:15:16,231 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32929,1689711286630' ***** 2023-07-18 20:15:16,231 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 20:15:16,232 DEBUG [M:0;jenkins-hbase4:32929] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18dd5bd9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:15:16,232 INFO [M:0;jenkins-hbase4:32929] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:16,235 INFO [M:0;jenkins-hbase4:32929] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4c53ffcd{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 20:15:16,235 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 20:15:16,235 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:16,235 INFO [M:0;jenkins-hbase4:32929] server.AbstractConnector(383): Stopped ServerConnector@604d87c0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:16,235 INFO [M:0;jenkins-hbase4:32929] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:16,236 INFO [M:0;jenkins-hbase4:32929] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@56d79499{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:16,236 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:15:16,236 INFO [M:0;jenkins-hbase4:32929] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@34048a3a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:16,237 INFO [M:0;jenkins-hbase4:32929] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32929,1689711286630 2023-07-18 20:15:16,237 INFO [M:0;jenkins-hbase4:32929] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32929,1689711286630; all regions closed. 2023-07-18 20:15:16,237 DEBUG [M:0;jenkins-hbase4:32929] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:16,237 INFO [M:0;jenkins-hbase4:32929] master.HMaster(1491): Stopping master jetty server 2023-07-18 20:15:16,238 INFO [M:0;jenkins-hbase4:32929] server.AbstractConnector(383): Stopped ServerConnector@6965f1fe{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:16,238 DEBUG [M:0;jenkins-hbase4:32929] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 20:15:16,238 DEBUG [M:0;jenkins-hbase4:32929] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 20:15:16,238 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689711290505] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689711290505,5,FailOnTimeoutGroup] 2023-07-18 20:15:16,238 INFO [M:0;jenkins-hbase4:32929] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 20:15:16,238 INFO [M:0;jenkins-hbase4:32929] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 20:15:16,238 INFO [M:0;jenkins-hbase4:32929] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-18 20:15:16,239 DEBUG [M:0;jenkins-hbase4:32929] master.HMaster(1512): Stopping service threads 2023-07-18 20:15:16,239 INFO [M:0;jenkins-hbase4:32929] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 20:15:16,241 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689711290505] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689711290505,5,FailOnTimeoutGroup] 2023-07-18 20:15:16,241 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-18 20:15:16,241 ERROR [M:0;jenkins-hbase4:32929] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-18 20:15:16,241 INFO [M:0;jenkins-hbase4:32929] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 20:15:16,242 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 20:15:16,242 DEBUG [M:0;jenkins-hbase4:32929] zookeeper.ZKUtil(398): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 20:15:16,242 WARN [M:0;jenkins-hbase4:32929] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 20:15:16,242 INFO [M:0;jenkins-hbase4:32929] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 20:15:16,242 INFO [M:0;jenkins-hbase4:32929] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 20:15:16,242 DEBUG [M:0;jenkins-hbase4:32929] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 20:15:16,243 INFO [M:0;jenkins-hbase4:32929] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:16,243 DEBUG [M:0;jenkins-hbase4:32929] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:16,243 DEBUG [M:0;jenkins-hbase4:32929] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 20:15:16,243 DEBUG [M:0;jenkins-hbase4:32929] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 20:15:16,243 INFO [M:0;jenkins-hbase4:32929] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.22 KB heapSize=621.35 KB 2023-07-18 20:15:16,257 INFO [M:0;jenkins-hbase4:32929] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.22 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/37ab7dfde2a04efbab861f42e0d8082e 2023-07-18 20:15:16,262 DEBUG [M:0;jenkins-hbase4:32929] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/37ab7dfde2a04efbab861f42e0d8082e as hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/37ab7dfde2a04efbab861f42e0d8082e 2023-07-18 20:15:16,267 INFO [M:0;jenkins-hbase4:32929] regionserver.HStore(1080): Added hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/37ab7dfde2a04efbab861f42e0d8082e, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-18 20:15:16,268 INFO [M:0;jenkins-hbase4:32929] regionserver.HRegion(2948): Finished flush of dataSize ~519.22 KB/531680, heapSize ~621.34 KB/636248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=1152, compaction requested=false 2023-07-18 20:15:16,269 INFO [M:0;jenkins-hbase4:32929] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:16,270 DEBUG [M:0;jenkins-hbase4:32929] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 20:15:16,273 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 20:15:16,273 INFO [M:0;jenkins-hbase4:32929] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 20:15:16,274 INFO [M:0;jenkins-hbase4:32929] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32929 2023-07-18 20:15:16,275 DEBUG [M:0;jenkins-hbase4:32929] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,32929,1689711286630 already deleted, retry=false 2023-07-18 20:15:16,330 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:16,330 INFO [RS:3;jenkins-hbase4:46139] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46139,1689711292506; zookeeper connection closed. 
2023-07-18 20:15:16,330 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): regionserver:46139-0x1017a1298a7000b, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:16,331 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1e50410e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1e50410e 2023-07-18 20:15:16,331 INFO [Listener at localhost/39395] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-18 20:15:16,430 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:16,431 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): master:32929-0x1017a1298a70000, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:16,430 INFO [M:0;jenkins-hbase4:32929] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32929,1689711286630; zookeeper connection closed. 2023-07-18 20:15:16,432 WARN [Listener at localhost/39395] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 20:15:16,436 INFO [Listener at localhost/39395] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:16,540 WARN [BP-1976443503-172.31.14.131-1689711282686 heartbeating to localhost/127.0.0.1:37087] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 20:15:16,540 WARN [BP-1976443503-172.31.14.131-1689711282686 heartbeating to localhost/127.0.0.1:37087] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1976443503-172.31.14.131-1689711282686 (Datanode Uuid f4c230e2-7837-4991-a6ab-6c6804d76b91) service to localhost/127.0.0.1:37087 2023-07-18 20:15:16,544 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/dfs/data/data5/current/BP-1976443503-172.31.14.131-1689711282686] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:16,544 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/dfs/data/data6/current/BP-1976443503-172.31.14.131-1689711282686] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:16,546 WARN [Listener at localhost/39395] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 20:15:16,549 INFO [Listener at localhost/39395] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:16,653 WARN [BP-1976443503-172.31.14.131-1689711282686 heartbeating to localhost/127.0.0.1:37087] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 20:15:16,654 WARN [BP-1976443503-172.31.14.131-1689711282686 heartbeating to localhost/127.0.0.1:37087] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1976443503-172.31.14.131-1689711282686 (Datanode Uuid 617e6469-29c1-46fc-94da-cf04d48b2089) service to localhost/127.0.0.1:37087 2023-07-18 20:15:16,654 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/dfs/data/data3/current/BP-1976443503-172.31.14.131-1689711282686] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:16,655 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/dfs/data/data4/current/BP-1976443503-172.31.14.131-1689711282686] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:16,656 WARN [Listener at localhost/39395] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 20:15:16,661 INFO [Listener at localhost/39395] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:16,766 WARN [BP-1976443503-172.31.14.131-1689711282686 heartbeating to localhost/127.0.0.1:37087] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 20:15:16,766 WARN [BP-1976443503-172.31.14.131-1689711282686 heartbeating to localhost/127.0.0.1:37087] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1976443503-172.31.14.131-1689711282686 (Datanode Uuid 3dfbad27-7337-496c-97d4-b22e27464a76) service to localhost/127.0.0.1:37087 2023-07-18 20:15:16,767 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/dfs/data/data1/current/BP-1976443503-172.31.14.131-1689711282686] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:16,767 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/cluster_4f743d42-486b-31c0-fe50-39be4fc40d23/dfs/data/data2/current/BP-1976443503-172.31.14.131-1689711282686] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:16,805 INFO [Listener at localhost/39395] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:16,825 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 20:15:16,826 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 20:15:16,826 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 20:15:16,925 INFO [Listener at localhost/39395] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 20:15:16,985 INFO [Listener 
at localhost/39395] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 20:15:16,985 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 20:15:16,985 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.log.dir so I do NOT create it in target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57 2023-07-18 20:15:16,985 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9634641-f4e2-6ed8-1730-9aee435b8d99/hadoop.tmp.dir so I do NOT create it in target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57 2023-07-18 20:15:16,985 INFO [Listener at localhost/39395] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/cluster_e56e151a-6e00-f4c6-3781-f4475d14b930, deleteOnExit=true 2023-07-18 20:15:16,985 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 20:15:16,985 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/test.cache.data in system properties and HBase conf 2023-07-18 20:15:16,986 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 20:15:16,986 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.log.dir in system properties and HBase conf 2023-07-18 20:15:16,986 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 20:15:16,986 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 20:15:16,986 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 20:15:16,986 DEBUG [Listener at localhost/39395] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 20:15:16,986 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 20:15:16,987 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 20:15:16,987 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 20:15:16,987 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 20:15:16,987 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 20:15:16,987 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 20:15:16,987 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 20:15:16,987 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 20:15:16,987 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 20:15:16,987 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/nfs.dump.dir in system properties and HBase conf 2023-07-18 20:15:16,987 INFO [Listener at localhost/39395] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/java.io.tmpdir in system properties and HBase conf 2023-07-18 20:15:16,988 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 20:15:16,988 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 20:15:16,988 INFO [Listener at localhost/39395] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 20:15:16,992 WARN [Listener at localhost/39395] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 20:15:16,992 WARN [Listener at localhost/39395] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 20:15:17,021 DEBUG [Listener at localhost/39395-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1017a1298a7000a, quorum=127.0.0.1:52937, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 20:15:17,021 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1017a1298a7000a, quorum=127.0.0.1:52937, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 20:15:17,046 WARN [Listener at localhost/39395] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:15:17,049 INFO [Listener at localhost/39395] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:15:17,057 INFO [Listener at localhost/39395] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/java.io.tmpdir/Jetty_localhost_39217_hdfs____9s5jdo/webapp 2023-07-18 20:15:17,172 INFO [Listener at localhost/39395] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39217 2023-07-18 20:15:17,182 WARN [Listener at localhost/39395] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 20:15:17,183 WARN [Listener at localhost/39395] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 20:15:17,289 WARN [Listener at localhost/44781] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 20:15:17,317 WARN [Listener at localhost/44781] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-18 20:15:17,372 
WARN [Listener at localhost/44781] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 20:15:17,375 WARN [Listener at localhost/44781] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:15:17,377 INFO [Listener at localhost/44781] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:15:17,382 INFO [Listener at localhost/44781] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/java.io.tmpdir/Jetty_localhost_44425_datanode____.wx3vs2/webapp 2023-07-18 20:15:17,486 INFO [Listener at localhost/44781] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44425 2023-07-18 20:15:17,493 WARN [Listener at localhost/46331] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 20:15:17,519 WARN [Listener at localhost/46331] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 20:15:17,521 WARN [Listener at localhost/46331] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:15:17,523 INFO [Listener at localhost/46331] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:15:17,528 INFO [Listener at localhost/46331] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/java.io.tmpdir/Jetty_localhost_37429_datanode____.t0c35u/webapp 2023-07-18 20:15:17,628 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe2d3c6ecfd723d21: Processing first storage report for DS-6e0821fb-1fab-489b-91a6-28e642a955a3 from datanode d222239b-3c58-466c-874a-0802d7d6daf2 2023-07-18 20:15:17,628 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe2d3c6ecfd723d21: from storage DS-6e0821fb-1fab-489b-91a6-28e642a955a3 node DatanodeRegistration(127.0.0.1:33871, datanodeUuid=d222239b-3c58-466c-874a-0802d7d6daf2, infoPort=36267, infoSecurePort=0, ipcPort=46331, storageInfo=lv=-57;cid=testClusterID;nsid=796468613;c=1689711316996), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 20:15:17,628 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe2d3c6ecfd723d21: Processing first storage report for DS-1b4b4c69-e02a-4675-918e-0af218839035 from datanode d222239b-3c58-466c-874a-0802d7d6daf2 2023-07-18 20:15:17,628 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe2d3c6ecfd723d21: from storage DS-1b4b4c69-e02a-4675-918e-0af218839035 node DatanodeRegistration(127.0.0.1:33871, datanodeUuid=d222239b-3c58-466c-874a-0802d7d6daf2, infoPort=36267, infoSecurePort=0, ipcPort=46331, storageInfo=lv=-57;cid=testClusterID;nsid=796468613;c=1689711316996), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:15:17,637 INFO [Listener at localhost/46331] log.Slf4jLog(67): Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37429 2023-07-18 20:15:17,646 WARN [Listener at localhost/42769] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 20:15:17,667 WARN [Listener at localhost/42769] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 20:15:17,670 WARN [Listener at localhost/42769] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:15:17,671 INFO [Listener at localhost/42769] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:15:17,675 INFO [Listener at localhost/42769] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/java.io.tmpdir/Jetty_localhost_41369_datanode____.ecef40/webapp 2023-07-18 20:15:17,777 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x535ad6ff952a67f3: Processing first storage report for DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e from datanode aa71a145-df57-4013-b039-6c2f2f91adc8 2023-07-18 20:15:17,778 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x535ad6ff952a67f3: from storage DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e node DatanodeRegistration(127.0.0.1:40187, datanodeUuid=aa71a145-df57-4013-b039-6c2f2f91adc8, infoPort=36833, infoSecurePort=0, ipcPort=42769, storageInfo=lv=-57;cid=testClusterID;nsid=796468613;c=1689711316996), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 20:15:17,778 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x535ad6ff952a67f3: Processing first storage report for DS-e390c96d-9bbf-4048-a593-de7da9b03680 from datanode aa71a145-df57-4013-b039-6c2f2f91adc8 2023-07-18 20:15:17,778 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x535ad6ff952a67f3: from storage DS-e390c96d-9bbf-4048-a593-de7da9b03680 node DatanodeRegistration(127.0.0.1:40187, datanodeUuid=aa71a145-df57-4013-b039-6c2f2f91adc8, infoPort=36833, infoSecurePort=0, ipcPort=42769, storageInfo=lv=-57;cid=testClusterID;nsid=796468613;c=1689711316996), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:15:17,791 INFO [Listener at localhost/42769] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41369 2023-07-18 20:15:17,802 WARN [Listener at localhost/37791] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 20:15:17,907 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdaff7bdc208843b2: Processing first storage report for DS-9fda3b67-04de-45e9-9a1a-510cb40b5905 from datanode 211f74fc-3c9b-4057-bda6-d849ca9b77d7 2023-07-18 20:15:17,907 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdaff7bdc208843b2: from storage DS-9fda3b67-04de-45e9-9a1a-510cb40b5905 node DatanodeRegistration(127.0.0.1:33579, datanodeUuid=211f74fc-3c9b-4057-bda6-d849ca9b77d7, infoPort=33709, infoSecurePort=0, ipcPort=37791, storageInfo=lv=-57;cid=testClusterID;nsid=796468613;c=1689711316996), blocks: 0, hasStaleStorage: true, processing 
time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:15:17,907 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdaff7bdc208843b2: Processing first storage report for DS-79ac8e49-54e8-4d0c-9c50-29e4db4e770e from datanode 211f74fc-3c9b-4057-bda6-d849ca9b77d7 2023-07-18 20:15:17,907 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdaff7bdc208843b2: from storage DS-79ac8e49-54e8-4d0c-9c50-29e4db4e770e node DatanodeRegistration(127.0.0.1:33579, datanodeUuid=211f74fc-3c9b-4057-bda6-d849ca9b77d7, infoPort=33709, infoSecurePort=0, ipcPort=37791, storageInfo=lv=-57;cid=testClusterID;nsid=796468613;c=1689711316996), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:15:17,915 DEBUG [Listener at localhost/37791] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57 2023-07-18 20:15:17,919 INFO [Listener at localhost/37791] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/cluster_e56e151a-6e00-f4c6-3781-f4475d14b930/zookeeper_0, clientPort=61189, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/cluster_e56e151a-6e00-f4c6-3781-f4475d14b930/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/cluster_e56e151a-6e00-f4c6-3781-f4475d14b930/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 20:15:17,920 INFO [Listener at localhost/37791] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61189 2023-07-18 20:15:17,921 INFO [Listener at localhost/37791] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:17,921 INFO [Listener at localhost/37791] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:17,941 INFO [Listener at localhost/37791] util.FSUtils(471): Created version file at hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4 with version=8 2023-07-18 20:15:17,941 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/hbase-staging 2023-07-18 20:15:17,942 DEBUG [Listener at localhost/37791] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 20:15:17,942 DEBUG [Listener at localhost/37791] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 20:15:17,942 DEBUG [Listener at localhost/37791] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 20:15:17,942 DEBUG [Listener at localhost/37791] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
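Editor's note, a hedged sketch: the restart recorded above is the standard HBaseTestingUtility flow, and the option values below simply mirror the StartMiniClusterOption printed in the log. The class name MiniClusterSketch is hypothetical; the HBaseTestingUtility/StartMiniClusterOption calls are the public HBase 2.x test API.

    // Minimal sketch of how a test typically brings up the minicluster seen above (assumption:
    // plain HBase 2.x test API; values mirror StartMiniClusterOption{numMasters=1, numRegionServers=3,
    // numDataNodes=3, numZkServers=1} from the log).
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);   // starts DFS, MiniZooKeeperCluster, master and region servers
        try {
          // test body would run here
        } finally {
          util.shutdownMiniCluster();    // produces the "Minicluster is down" message seen earlier
        }
      }
    }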
2023-07-18 20:15:17,944 INFO [Listener at localhost/37791] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:15:17,944 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:17,944 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:17,944 INFO [Listener at localhost/37791] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:15:17,944 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:17,944 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:15:17,945 INFO [Listener at localhost/37791] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:15:17,945 INFO [Listener at localhost/37791] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41401 2023-07-18 20:15:17,946 INFO [Listener at localhost/37791] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:17,948 INFO [Listener at localhost/37791] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:17,949 INFO [Listener at localhost/37791] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41401 connecting to ZooKeeper ensemble=127.0.0.1:61189 2023-07-18 20:15:17,956 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:414010x0, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:15:17,957 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41401-0x1017a1316a00000 connected 2023-07-18 20:15:17,973 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:15:17,974 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:17,974 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:15:17,977 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41401 2023-07-18 20:15:17,978 DEBUG [Listener at localhost/37791] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41401 2023-07-18 20:15:17,982 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41401 2023-07-18 20:15:17,982 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41401 2023-07-18 20:15:17,982 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41401 2023-07-18 20:15:17,984 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:15:17,985 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:15:17,985 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:15:17,985 INFO [Listener at localhost/37791] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 20:15:17,985 INFO [Listener at localhost/37791] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:15:17,985 INFO [Listener at localhost/37791] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:15:17,985 INFO [Listener at localhost/37791] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
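Editor's note, a hedged sketch: the handlerCount=3 figures in the RpcExecutor lines above usually come from the test Configuration, and maxQueueLength=30 is likely that handler count times the default limit of 10 queued calls per handler. RpcHandlerConfigSketch is a hypothetical helper; hbase.regionserver.handler.count is the standard property name.

    // Illustrative only: a low handler count such as the one logged above would be set like this.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcHandlerConfigSketch {
      public static Configuration lowHandlerCountConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.regionserver.handler.count", 3);  // matches handlerCount=3 in the log
        return conf;
      }
    }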
2023-07-18 20:15:17,986 INFO [Listener at localhost/37791] http.HttpServer(1146): Jetty bound to port 46311 2023-07-18 20:15:17,986 INFO [Listener at localhost/37791] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:15:17,993 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:17,994 INFO [Listener at localhost/37791] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39bac69f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:15:17,994 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:17,994 INFO [Listener at localhost/37791] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ffe8847{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:15:18,115 INFO [Listener at localhost/37791] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:15:18,116 INFO [Listener at localhost/37791] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:15:18,116 INFO [Listener at localhost/37791] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:15:18,116 INFO [Listener at localhost/37791] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 20:15:18,117 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:18,118 INFO [Listener at localhost/37791] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@37c3b5a4{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/java.io.tmpdir/jetty-0_0_0_0-46311-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7537068974706516136/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 20:15:18,120 INFO [Listener at localhost/37791] server.AbstractConnector(333): Started ServerConnector@1226f160{HTTP/1.1, (http/1.1)}{0.0.0.0:46311} 2023-07-18 20:15:18,120 INFO [Listener at localhost/37791] server.Server(415): Started @37409ms 2023-07-18 20:15:18,120 INFO [Listener at localhost/37791] master.HMaster(444): hbase.rootdir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4, hbase.cluster.distributed=false 2023-07-18 20:15:18,133 INFO [Listener at localhost/37791] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:15:18,134 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:18,134 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:18,134 
INFO [Listener at localhost/37791] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:15:18,134 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:18,134 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:15:18,134 INFO [Listener at localhost/37791] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:15:18,135 INFO [Listener at localhost/37791] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40503 2023-07-18 20:15:18,135 INFO [Listener at localhost/37791] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 20:15:18,136 DEBUG [Listener at localhost/37791] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 20:15:18,136 INFO [Listener at localhost/37791] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:18,137 INFO [Listener at localhost/37791] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:18,138 INFO [Listener at localhost/37791] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40503 connecting to ZooKeeper ensemble=127.0.0.1:61189 2023-07-18 20:15:18,142 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:405030x0, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:15:18,144 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40503-0x1017a1316a00001 connected 2023-07-18 20:15:18,144 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:15:18,144 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:18,145 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:15:18,146 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40503 2023-07-18 20:15:18,147 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40503 2023-07-18 20:15:18,147 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40503 2023-07-18 20:15:18,150 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40503 2023-07-18 20:15:18,150 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40503 2023-07-18 20:15:18,152 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:15:18,152 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:15:18,152 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:15:18,153 INFO [Listener at localhost/37791] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 20:15:18,153 INFO [Listener at localhost/37791] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:15:18,153 INFO [Listener at localhost/37791] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:15:18,153 INFO [Listener at localhost/37791] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 20:15:18,154 INFO [Listener at localhost/37791] http.HttpServer(1146): Jetty bound to port 34807 2023-07-18 20:15:18,154 INFO [Listener at localhost/37791] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:15:18,157 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:18,158 INFO [Listener at localhost/37791] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6acb7487{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:15:18,158 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:18,158 INFO [Listener at localhost/37791] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@22cdd0c4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:15:18,283 INFO [Listener at localhost/37791] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:15:18,284 INFO [Listener at localhost/37791] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:15:18,284 INFO [Listener at localhost/37791] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:15:18,284 INFO [Listener at localhost/37791] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 20:15:18,285 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:18,285 INFO 
[Listener at localhost/37791] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@ec73c16{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/java.io.tmpdir/jetty-0_0_0_0-34807-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4138540561028604888/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:18,287 INFO [Listener at localhost/37791] server.AbstractConnector(333): Started ServerConnector@2e719b7b{HTTP/1.1, (http/1.1)}{0.0.0.0:34807} 2023-07-18 20:15:18,287 INFO [Listener at localhost/37791] server.Server(415): Started @37577ms 2023-07-18 20:15:18,298 INFO [Listener at localhost/37791] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:15:18,298 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:18,299 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:18,299 INFO [Listener at localhost/37791] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:15:18,299 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:18,299 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:15:18,299 INFO [Listener at localhost/37791] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:15:18,300 INFO [Listener at localhost/37791] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44825 2023-07-18 20:15:18,300 INFO [Listener at localhost/37791] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 20:15:18,301 DEBUG [Listener at localhost/37791] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 20:15:18,302 INFO [Listener at localhost/37791] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:18,303 INFO [Listener at localhost/37791] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:18,305 INFO [Listener at localhost/37791] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44825 connecting to ZooKeeper ensemble=127.0.0.1:61189 2023-07-18 20:15:18,309 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:448250x0, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 
20:15:18,310 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44825-0x1017a1316a00002 connected 2023-07-18 20:15:18,310 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:15:18,311 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:18,311 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:15:18,314 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44825 2023-07-18 20:15:18,315 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44825 2023-07-18 20:15:18,319 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44825 2023-07-18 20:15:18,319 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44825 2023-07-18 20:15:18,319 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44825 2023-07-18 20:15:18,321 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:15:18,321 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:15:18,321 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:15:18,321 INFO [Listener at localhost/37791] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 20:15:18,321 INFO [Listener at localhost/37791] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:15:18,322 INFO [Listener at localhost/37791] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:15:18,322 INFO [Listener at localhost/37791] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
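Editor's note, a hedged sketch: the region servers above all join the ZooKeeper ensemble at 127.0.0.1:61189. A client or test would reach the same ensemble as below; ClientSketch is hypothetical, 61189 is the clientPort from this particular run, and in a real test the Configuration would normally come from HBaseTestingUtility#getConfiguration() rather than being set by hand.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 61189);  // port from this run's MiniZooKeeperCluster
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
          // Admin / Table operations against the minicluster would go here.
        }
      }
    }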
2023-07-18 20:15:18,322 INFO [Listener at localhost/37791] http.HttpServer(1146): Jetty bound to port 40331 2023-07-18 20:15:18,323 INFO [Listener at localhost/37791] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:15:18,325 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:18,326 INFO [Listener at localhost/37791] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@70c264{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:15:18,326 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:18,326 INFO [Listener at localhost/37791] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7754bef0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:15:18,439 INFO [Listener at localhost/37791] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:15:18,441 INFO [Listener at localhost/37791] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:15:18,441 INFO [Listener at localhost/37791] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:15:18,441 INFO [Listener at localhost/37791] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 20:15:18,442 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:18,443 INFO [Listener at localhost/37791] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5033e85c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/java.io.tmpdir/jetty-0_0_0_0-40331-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2875503363222686372/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:18,445 INFO [Listener at localhost/37791] server.AbstractConnector(333): Started ServerConnector@5766466b{HTTP/1.1, (http/1.1)}{0.0.0.0:40331} 2023-07-18 20:15:18,445 INFO [Listener at localhost/37791] server.Server(415): Started @37735ms 2023-07-18 20:15:18,463 INFO [Listener at localhost/37791] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:15:18,463 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:18,463 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:18,463 INFO [Listener at localhost/37791] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:15:18,463 INFO 
[Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:18,463 INFO [Listener at localhost/37791] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:15:18,463 INFO [Listener at localhost/37791] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:15:18,464 INFO [Listener at localhost/37791] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36795 2023-07-18 20:15:18,465 INFO [Listener at localhost/37791] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 20:15:18,468 DEBUG [Listener at localhost/37791] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 20:15:18,468 INFO [Listener at localhost/37791] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:18,471 INFO [Listener at localhost/37791] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:18,473 INFO [Listener at localhost/37791] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36795 connecting to ZooKeeper ensemble=127.0.0.1:61189 2023-07-18 20:15:18,477 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:367950x0, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:15:18,479 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36795-0x1017a1316a00003 connected 2023-07-18 20:15:18,479 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:15:18,480 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:18,481 DEBUG [Listener at localhost/37791] zookeeper.ZKUtil(164): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:15:18,481 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36795 2023-07-18 20:15:18,481 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36795 2023-07-18 20:15:18,482 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36795 2023-07-18 20:15:18,488 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36795 2023-07-18 20:15:18,488 DEBUG [Listener at localhost/37791] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36795 2023-07-18 20:15:18,489 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:15:18,490 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:15:18,490 INFO [Listener at localhost/37791] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:15:18,490 INFO [Listener at localhost/37791] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 20:15:18,490 INFO [Listener at localhost/37791] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:15:18,490 INFO [Listener at localhost/37791] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:15:18,490 INFO [Listener at localhost/37791] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 20:15:18,491 INFO [Listener at localhost/37791] http.HttpServer(1146): Jetty bound to port 40653 2023-07-18 20:15:18,491 INFO [Listener at localhost/37791] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:15:18,495 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:18,495 INFO [Listener at localhost/37791] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ee15bbe{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:15:18,496 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:18,496 INFO [Listener at localhost/37791] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@37c4b42f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:15:18,636 INFO [Listener at localhost/37791] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:15:18,637 INFO [Listener at localhost/37791] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:15:18,637 INFO [Listener at localhost/37791] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:15:18,638 INFO [Listener at localhost/37791] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 20:15:18,639 INFO [Listener at localhost/37791] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:18,640 INFO [Listener at localhost/37791] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@7bcb3b7d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/java.io.tmpdir/jetty-0_0_0_0-40653-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5127322317165332948/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:18,642 INFO [Listener at localhost/37791] server.AbstractConnector(333): Started ServerConnector@17e2c0dc{HTTP/1.1, (http/1.1)}{0.0.0.0:40653} 2023-07-18 20:15:18,642 INFO [Listener at localhost/37791] server.Server(415): Started @37932ms 2023-07-18 20:15:18,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:15:18,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@248326c2{HTTP/1.1, (http/1.1)}{0.0.0.0:39589} 2023-07-18 20:15:18,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37939ms 2023-07-18 20:15:18,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41401,1689711317943 2023-07-18 20:15:18,651 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 20:15:18,651 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41401,1689711317943 2023-07-18 20:15:18,654 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:15:18,654 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:15:18,654 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:15:18,654 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:18,654 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:15:18,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 20:15:18,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41401,1689711317943 from backup master directory 2023-07-18 20:15:18,658 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 20:15:18,659 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41401,1689711317943 2023-07-18 20:15:18,659 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 20:15:18,659 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 20:15:18,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41401,1689711317943 2023-07-18 20:15:18,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/hbase.id with ID: 8304269d-e61d-4024-ac00-77ee3deb2143 2023-07-18 20:15:18,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:18,699 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:18,712 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6721f9f6 to 127.0.0.1:61189 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:18,718 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@707231a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:15:18,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:18,719 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 20:15:18,719 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:18,720 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/data/master/store-tmp 2023-07-18 20:15:18,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:18,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 20:15:18,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:18,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:18,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 20:15:18,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:18,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
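
The column-family attributes printed in the entries above for the 'proc' family of master:store (VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536', and so on) map one-to-one onto the public HBase 2.x descriptor builders. A minimal sketch, assuming a placeholder user table named "demo" — master:store itself is an internal master region and is not created through the client API:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ProcFamilyDescriptorSketch {
      public static void main(String[] args) {
        // 'proc' family with the attributes the log prints for master:store.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)                            // VERSIONS => '1'
            .setBloomFilterType(BloomType.ROW)            // BLOOMFILTER => 'ROW'
            .setBlocksize(65536)                          // BLOCKSIZE => '65536'
            .setInMemory(false)                           // IN_MEMORY => 'false'
            .setBlockCacheEnabled(true)                   // BLOCKCACHE => 'true'
            .setDataBlockEncoding(DataBlockEncoding.NONE) // DATA_BLOCK_ENCODING => 'NONE'
            .build();

        // Placeholder table name for illustration only.
        TableDescriptor table = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo"))
            .setColumnFamily(proc)
            .build();
        System.out.println(table);
      }
    }
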
2023-07-18 20:15:18,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 20:15:18,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/WALs/jenkins-hbase4.apache.org,41401,1689711317943 2023-07-18 20:15:18,737 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41401%2C1689711317943, suffix=, logDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/WALs/jenkins-hbase4.apache.org,41401,1689711317943, archiveDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/oldWALs, maxLogs=10 2023-07-18 20:15:18,760 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40187,DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e,DISK] 2023-07-18 20:15:18,760 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33579,DS-9fda3b67-04de-45e9-9a1a-510cb40b5905,DISK] 2023-07-18 20:15:18,767 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33871,DS-6e0821fb-1fab-489b-91a6-28e642a955a3,DISK] 2023-07-18 20:15:18,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/WALs/jenkins-hbase4.apache.org,41401,1689711317943/jenkins-hbase4.apache.org%2C41401%2C1689711317943.1689711318738 2023-07-18 20:15:18,774 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33579,DS-9fda3b67-04de-45e9-9a1a-510cb40b5905,DISK], DatanodeInfoWithStorage[127.0.0.1:33871,DS-6e0821fb-1fab-489b-91a6-28e642a955a3,DISK], DatanodeInfoWithStorage[127.0.0.1:40187,DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e,DISK]] 2023-07-18 20:15:18,774 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:18,774 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:18,774 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:18,774 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:18,780 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:18,782 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 20:15:18,783 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 20:15:18,788 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:18,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:18,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:18,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:18,797 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:18,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9930244320, jitterRate=-0.07517392933368683}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:18,797 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 20:15:18,802 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 20:15:18,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 20:15:18,804 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 20:15:18,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 20:15:18,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 20:15:18,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 20:15:18,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 20:15:18,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 20:15:18,812 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 20:15:18,813 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 20:15:18,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 20:15:18,813 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 20:15:18,816 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:18,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 20:15:18,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 20:15:18,819 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 20:15:18,820 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:18,820 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:18,820 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-18 20:15:18,820 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:18,820 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:18,823 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41401,1689711317943, sessionid=0x1017a1316a00000, setting cluster-up flag (Was=false) 2023-07-18 20:15:18,827 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:18,833 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 20:15:18,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41401,1689711317943 2023-07-18 20:15:18,839 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:18,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 20:15:18,844 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41401,1689711317943 2023-07-18 20:15:18,845 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.hbase-snapshot/.tmp 2023-07-18 20:15:18,848 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 20:15:18,848 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 20:15:18,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 20:15:18,850 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41401,1689711317943] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 20:15:18,850 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
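
The RSGroupAdminEndpoint coprocessor loaded above is the rsgroup feature this test exercises; on branch-2 it is not enabled by default. A minimal sketch of the configuration a deployment would typically use to turn it on, with key names as in the rsgroup documentation for HBase 2.x:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EnableRsGroupsSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Load the rsgroup admin endpoint on the master, as seen in the log above.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // Use the group-aware balancer so region assignment respects group boundaries.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        System.out.println(conf.get("hbase.master.loadbalancer.class"));
      }
    }

In a real cluster the same two properties would normally be placed in hbase-site.xml on the master rather than set programmatically.
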
2023-07-18 20:15:18,850 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-18 20:15:18,852 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 20:15:18,868 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 20:15:18,868 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 20:15:18,869 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 20:15:18,869 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
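
The two "Loaded config" entries above come from the StochasticLoadBalancer, and the numbers they report (maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000, runMaxSteps=false) are its defaults. A minimal sketch of tuning those knobs, assuming the standard key names this balancer reads on branch-2:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);    // cap on candidate moves
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);      // steps scale with region count
        conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L); // ms per balancer run
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);   // stop at the time limit instead
        System.out.println(conf.get("hbase.master.balancer.stochastic.maxSteps"));
      }
    }
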
2023-07-18 20:15:18,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:15:18,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:15:18,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:15:18,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:15:18,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 20:15:18,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:18,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:15:18,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:18,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689711348874 2023-07-18 20:15:18,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 20:15:18,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 20:15:18,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 20:15:18,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 20:15:18,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 20:15:18,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 20:15:18,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
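
The executor entries above all start pools whose corePoolSize equals maxPoolSize; HBase's internal ExecutorService is essentially a named fixed-size thread pool. A JDK-only sketch of the equivalent sizing semantics (the thread name "MASTER_OPEN_REGION-demo" is purely illustrative):

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class FixedPoolSketch {
      public static void main(String[] args) {
        // corePoolSize == maxPoolSize == 5, as for MASTER_OPEN_REGION above.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            5, 5, 60L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<>(),
            r -> new Thread(r, "MASTER_OPEN_REGION-demo"));
        pool.allowCoreThreadTimeOut(true); // let idle workers exit
        pool.execute(() -> System.out.println("task ran on " + Thread.currentThread().getName()));
        pool.shutdown();
      }
    }
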
2023-07-18 20:15:18,875 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 20:15:18,876 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 20:15:18,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 20:15:18,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 20:15:18,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 20:15:18,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 20:15:18,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 20:15:18,878 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:18,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689711318877,5,FailOnTimeoutGroup] 2023-07-18 20:15:18,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689711318883,5,FailOnTimeoutGroup] 2023-07-18 20:15:18,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:18,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 20:15:18,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:18,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-18 20:15:18,903 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:18,904 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:18,904 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4 2023-07-18 20:15:18,927 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:18,931 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 20:15:18,932 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/info 2023-07-18 20:15:18,933 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 20:15:18,934 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:18,934 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 20:15:18,935 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/rep_barrier 2023-07-18 20:15:18,936 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 20:15:18,936 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:18,936 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 20:15:18,938 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/table 2023-07-18 20:15:18,938 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 20:15:18,939 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:18,939 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740 2023-07-18 20:15:18,940 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740 2023-07-18 20:15:18,943 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 20:15:18,945 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 20:15:18,945 INFO [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(951): ClusterId : 8304269d-e61d-4024-ac00-77ee3deb2143 2023-07-18 20:15:18,947 INFO [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(951): ClusterId : 8304269d-e61d-4024-ac00-77ee3deb2143 2023-07-18 20:15:18,952 DEBUG [RS:0;jenkins-hbase4:40503] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 20:15:18,952 DEBUG [RS:1;jenkins-hbase4:44825] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 20:15:18,953 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:18,953 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10941768800, jitterRate=0.019031628966331482}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 20:15:18,953 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 20:15:18,953 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 20:15:18,953 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 20:15:18,953 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 20:15:18,953 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 20:15:18,953 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 20:15:18,954 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 20:15:18,954 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 20:15:18,955 INFO [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(951): ClusterId : 8304269d-e61d-4024-ac00-77ee3deb2143 2023-07-18 20:15:18,956 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 20:15:18,956 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 20:15:18,956 DEBUG [RS:2;jenkins-hbase4:36795] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 20:15:18,956 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 20:15:18,957 DEBUG [RS:1;jenkins-hbase4:44825] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 20:15:18,957 DEBUG [RS:1;jenkins-hbase4:44825] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 20:15:18,957 DEBUG 
[RS:0;jenkins-hbase4:40503] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 20:15:18,957 DEBUG [RS:0;jenkins-hbase4:40503] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 20:15:18,958 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 20:15:18,959 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 20:15:18,959 DEBUG [RS:1;jenkins-hbase4:44825] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 20:15:18,960 DEBUG [RS:1;jenkins-hbase4:44825] zookeeper.ReadOnlyZKClient(139): Connect 0x1250e6ba to 127.0.0.1:61189 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:18,961 DEBUG [RS:0;jenkins-hbase4:40503] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 20:15:18,968 DEBUG [RS:2;jenkins-hbase4:36795] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 20:15:18,968 DEBUG [RS:2;jenkins-hbase4:36795] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 20:15:18,969 DEBUG [RS:2;jenkins-hbase4:36795] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 20:15:18,970 DEBUG [RS:0;jenkins-hbase4:40503] zookeeper.ReadOnlyZKClient(139): Connect 0x4f21aee3 to 127.0.0.1:61189 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:18,975 DEBUG [RS:2;jenkins-hbase4:36795] zookeeper.ReadOnlyZKClient(139): Connect 0x56fc83ca to 127.0.0.1:61189 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:18,990 DEBUG [RS:1;jenkins-hbase4:44825] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c970a80, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:15:18,990 DEBUG [RS:1;jenkins-hbase4:44825] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18a88dca, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:15:18,992 DEBUG [RS:2;jenkins-hbase4:36795] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c04e8ba, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:15:18,992 DEBUG [RS:0;jenkins-hbase4:40503] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45d9c593, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 
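
The ReadOnlyZKClient lines above report the ZooKeeper client settings in force (session timeout 90000 ms, 30 retries, 1000 ms retry interval). A minimal sketch that sets the same values on a client Configuration; these are the conventional key names for those settings, and the keep-alive interval is left at its default here:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ZkClientSettingsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("zookeeper.session.timeout", 90_000);           // session timeout seen in the log
        conf.setInt("zookeeper.recovery.retry", 30);                // retry count
        conf.setInt("zookeeper.recovery.retry.intervalmill", 1000); // retry interval in ms
        // Requires a reachable cluster; shown only to illustrate where the settings apply.
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          System.out.println("connected: " + !conn.isClosed());
        }
      }
    }
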
2023-07-18 20:15:18,992 DEBUG [RS:2;jenkins-hbase4:36795] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19066d5e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:15:18,992 DEBUG [RS:0;jenkins-hbase4:40503] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@db434c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:15:19,003 DEBUG [RS:1;jenkins-hbase4:44825] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:44825 2023-07-18 20:15:19,003 INFO [RS:1;jenkins-hbase4:44825] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 20:15:19,003 INFO [RS:1;jenkins-hbase4:44825] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 20:15:19,003 DEBUG [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 20:15:19,004 DEBUG [RS:0;jenkins-hbase4:40503] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:40503 2023-07-18 20:15:19,004 INFO [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41401,1689711317943 with isa=jenkins-hbase4.apache.org/172.31.14.131:44825, startcode=1689711318298 2023-07-18 20:15:19,004 INFO [RS:0;jenkins-hbase4:40503] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 20:15:19,004 INFO [RS:0;jenkins-hbase4:40503] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 20:15:19,004 DEBUG [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 20:15:19,004 DEBUG [RS:1;jenkins-hbase4:44825] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 20:15:19,004 DEBUG [RS:2;jenkins-hbase4:36795] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:36795 2023-07-18 20:15:19,005 INFO [RS:2;jenkins-hbase4:36795] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 20:15:19,005 INFO [RS:2;jenkins-hbase4:36795] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 20:15:19,005 DEBUG [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 20:15:19,005 INFO [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41401,1689711317943 with isa=jenkins-hbase4.apache.org/172.31.14.131:36795, startcode=1689711318462 2023-07-18 20:15:19,005 DEBUG [RS:2;jenkins-hbase4:36795] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 20:15:19,006 INFO [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41401,1689711317943 with isa=jenkins-hbase4.apache.org/172.31.14.131:40503, startcode=1689711318133 2023-07-18 20:15:19,006 DEBUG [RS:0;jenkins-hbase4:40503] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 20:15:19,011 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58437, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 20:15:19,012 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53669, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 20:15:19,012 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51719, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 20:15:19,013 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41401] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:19,014 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41401,1689711317943] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 20:15:19,014 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41401,1689711317943] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 20:15:19,014 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41401] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:19,015 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41401] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,015 DEBUG [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4 2023-07-18 20:15:19,015 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41401,1689711317943] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 20:15:19,015 DEBUG [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44781 2023-07-18 20:15:19,015 DEBUG [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46311 2023-07-18 20:15:19,015 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41401,1689711317943] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 20:15:19,015 DEBUG [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4 2023-07-18 20:15:19,015 DEBUG [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44781 2023-07-18 20:15:19,015 DEBUG [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46311 2023-07-18 20:15:19,017 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:19,019 DEBUG [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4 2023-07-18 20:15:19,019 DEBUG [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44781 2023-07-18 20:15:19,019 DEBUG [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46311 2023-07-18 20:15:19,022 DEBUG [RS:2;jenkins-hbase4:36795] zookeeper.ZKUtil(162): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,022 WARN [RS:2;jenkins-hbase4:36795] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 20:15:19,022 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36795,1689711318462] 2023-07-18 20:15:19,022 INFO [RS:2;jenkins-hbase4:36795] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:19,022 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40503,1689711318133] 2023-07-18 20:15:19,023 DEBUG [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,023 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:19,023 DEBUG [RS:0;jenkins-hbase4:40503] zookeeper.ZKUtil(162): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:19,023 WARN [RS:0;jenkins-hbase4:40503] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 20:15:19,023 INFO [RS:0;jenkins-hbase4:40503] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:19,023 DEBUG [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:19,024 DEBUG [RS:1;jenkins-hbase4:44825] zookeeper.ZKUtil(162): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:19,024 WARN [RS:1;jenkins-hbase4:44825] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
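
Each region server above instantiates AsyncFSWALProvider, the default WAL implementation on branch-2. The provider is selectable through configuration; a minimal sketch using the provider ids documented for 2.x:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");        // AsyncFSWALProvider, as in the log
        // conf.set("hbase.wal.provider", "filesystem");  // classic FSHLog-based provider
        // conf.set("hbase.wal.provider", "multiwal");    // several WALs per region server
        System.out.println(conf.get("hbase.wal.provider"));
      }
    }
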
2023-07-18 20:15:19,024 INFO [RS:1;jenkins-hbase4:44825] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:19,024 DEBUG [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:19,024 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44825,1689711318298] 2023-07-18 20:15:19,040 DEBUG [RS:0;jenkins-hbase4:40503] zookeeper.ZKUtil(162): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,041 DEBUG [RS:2;jenkins-hbase4:36795] zookeeper.ZKUtil(162): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,041 DEBUG [RS:0;jenkins-hbase4:40503] zookeeper.ZKUtil(162): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:19,041 DEBUG [RS:2;jenkins-hbase4:36795] zookeeper.ZKUtil(162): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:19,041 DEBUG [RS:0;jenkins-hbase4:40503] zookeeper.ZKUtil(162): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:19,041 DEBUG [RS:1;jenkins-hbase4:44825] zookeeper.ZKUtil(162): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,041 DEBUG [RS:2;jenkins-hbase4:36795] zookeeper.ZKUtil(162): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:19,042 DEBUG [RS:0;jenkins-hbase4:40503] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 20:15:19,042 DEBUG [RS:1;jenkins-hbase4:44825] zookeeper.ZKUtil(162): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:19,043 INFO [RS:0;jenkins-hbase4:40503] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 20:15:19,043 DEBUG [RS:1;jenkins-hbase4:44825] zookeeper.ZKUtil(162): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:19,044 DEBUG [RS:2;jenkins-hbase4:36795] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 20:15:19,044 DEBUG [RS:1;jenkins-hbase4:44825] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 20:15:19,044 INFO [RS:1;jenkins-hbase4:44825] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 20:15:19,044 INFO 
[RS:2;jenkins-hbase4:36795] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 20:15:19,048 INFO [RS:0;jenkins-hbase4:40503] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 20:15:19,051 INFO [RS:2;jenkins-hbase4:36795] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 20:15:19,055 INFO [RS:1;jenkins-hbase4:44825] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 20:15:19,059 INFO [RS:0;jenkins-hbase4:40503] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 20:15:19,059 INFO [RS:2;jenkins-hbase4:36795] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 20:15:19,059 INFO [RS:1;jenkins-hbase4:44825] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 20:15:19,059 INFO [RS:2;jenkins-hbase4:36795] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,059 INFO [RS:0;jenkins-hbase4:40503] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,059 INFO [RS:1;jenkins-hbase4:44825] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,059 INFO [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 20:15:19,063 INFO [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 20:15:19,066 INFO [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 20:15:19,066 INFO [RS:2;jenkins-hbase4:36795] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
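
The MemStoreFlusher and compaction-throughput entries above reflect heap-derived defaults: the global memstore limit is a fraction of the region server heap, and compaction I/O is throttled between a lower and a higher bound. A minimal sketch of the corresponding settings, with key names assumed to match this branch and values mirroring the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FlushAndCompactionTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of heap all memstores may use; the 782.4 M limit above is heap * this fraction.
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        // PressureAwareCompactionThroughputController bounds, in bytes per second.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024); // 100 MB/s
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);   // 50 MB/s
        System.out.println(conf.get("hbase.regionserver.global.memstore.size"));
      }
    }
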
2023-07-18 20:15:19,067 DEBUG [RS:2;jenkins-hbase4:36795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,067 DEBUG [RS:2;jenkins-hbase4:36795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,067 DEBUG [RS:2;jenkins-hbase4:36795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,068 DEBUG [RS:2;jenkins-hbase4:36795] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,068 DEBUG [RS:2;jenkins-hbase4:36795] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,068 DEBUG [RS:2;jenkins-hbase4:36795] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:15:19,068 DEBUG [RS:2;jenkins-hbase4:36795] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,068 DEBUG [RS:2;jenkins-hbase4:36795] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,068 DEBUG [RS:2;jenkins-hbase4:36795] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,068 DEBUG [RS:2;jenkins-hbase4:36795] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,075 INFO [RS:1;jenkins-hbase4:44825] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,084 INFO [RS:2;jenkins-hbase4:36795] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,084 INFO [RS:2;jenkins-hbase4:36795] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,084 INFO [RS:2;jenkins-hbase4:36795] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,084 INFO [RS:2;jenkins-hbase4:36795] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,087 INFO [RS:0;jenkins-hbase4:40503] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 20:15:19,087 DEBUG [RS:0;jenkins-hbase4:40503] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,087 DEBUG [RS:0;jenkins-hbase4:40503] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,087 DEBUG [RS:0;jenkins-hbase4:40503] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,087 DEBUG [RS:0;jenkins-hbase4:40503] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,087 DEBUG [RS:0;jenkins-hbase4:40503] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,087 DEBUG [RS:0;jenkins-hbase4:40503] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:15:19,087 DEBUG [RS:0;jenkins-hbase4:40503] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,087 DEBUG [RS:0;jenkins-hbase4:40503] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,087 DEBUG [RS:0;jenkins-hbase4:40503] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,087 DEBUG [RS:0;jenkins-hbase4:40503] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,091 DEBUG [RS:1;jenkins-hbase4:44825] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,095 DEBUG [RS:1;jenkins-hbase4:44825] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,095 DEBUG [RS:1;jenkins-hbase4:44825] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,096 DEBUG [RS:1;jenkins-hbase4:44825] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,096 DEBUG [RS:1;jenkins-hbase4:44825] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,096 DEBUG [RS:1;jenkins-hbase4:44825] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:15:19,096 INFO [RS:0;jenkins-hbase4:40503] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-18 20:15:19,096 DEBUG [RS:1;jenkins-hbase4:44825] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,096 DEBUG [RS:1;jenkins-hbase4:44825] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,096 INFO [RS:0;jenkins-hbase4:40503] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,096 DEBUG [RS:1;jenkins-hbase4:44825] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,096 INFO [RS:0;jenkins-hbase4:40503] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,096 DEBUG [RS:1;jenkins-hbase4:44825] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:19,096 INFO [RS:0;jenkins-hbase4:40503] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,105 INFO [RS:1;jenkins-hbase4:44825] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,105 INFO [RS:1;jenkins-hbase4:44825] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,105 INFO [RS:1;jenkins-hbase4:44825] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,105 INFO [RS:1;jenkins-hbase4:44825] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,108 INFO [RS:2;jenkins-hbase4:36795] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 20:15:19,108 INFO [RS:2;jenkins-hbase4:36795] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36795,1689711318462-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
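Each ScheduledChore entry above is a periodic task registered with a ChoreService. These are HBase-internal classes, so the following is only an illustrative sketch of what name, period and unit mean for the chores listed, not client-facing API.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService service = new ChoreService("demo");
    // Fires chore() every 1000 ms, like CompactionChecker and MemstoreFlusherChore above.
    service.scheduleChore(new ScheduledChore("demoChore", stopper, 1000) {
      @Override
      protected void chore() {
        System.out.println("chore tick");
      }
    });
    Thread.sleep(3000);
    service.shutdown();
  }
}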
2023-07-18 20:15:19,109 DEBUG [jenkins-hbase4:41401] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 20:15:19,110 DEBUG [jenkins-hbase4:41401] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:19,110 DEBUG [jenkins-hbase4:41401] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:19,110 DEBUG [jenkins-hbase4:41401] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:19,110 DEBUG [jenkins-hbase4:41401] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:19,110 DEBUG [jenkins-hbase4:41401] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:19,112 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44825,1689711318298, state=OPENING 2023-07-18 20:15:19,114 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 20:15:19,115 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:19,115 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 20:15:19,120 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44825,1689711318298}] 2023-07-18 20:15:19,123 INFO [RS:0;jenkins-hbase4:40503] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 20:15:19,123 INFO [RS:0;jenkins-hbase4:40503] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40503,1689711318133-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,127 INFO [RS:1;jenkins-hbase4:44825] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 20:15:19,127 INFO [RS:1;jenkins-hbase4:44825] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44825,1689711318298-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
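Once the OpenRegionProcedure scheduled above completes, the hbase:meta location written to /hbase/meta-region-server is what clients resolve. A hedged client-side sketch (the Connection is assumed to already exist):

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationSketch {
  public static HRegionLocation metaLocation(Connection conn) throws java.io.IOException {
    try (RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // hbase:meta has a single region here, so the empty start row is enough.
      return locator.getRegionLocation(HConstants.EMPTY_START_ROW);
    }
  }
}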
2023-07-18 20:15:19,142 INFO [RS:2;jenkins-hbase4:36795] regionserver.Replication(203): jenkins-hbase4.apache.org,36795,1689711318462 started 2023-07-18 20:15:19,142 INFO [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36795,1689711318462, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36795, sessionid=0x1017a1316a00003 2023-07-18 20:15:19,142 DEBUG [RS:2;jenkins-hbase4:36795] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 20:15:19,143 DEBUG [RS:2;jenkins-hbase4:36795] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,143 DEBUG [RS:2;jenkins-hbase4:36795] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36795,1689711318462' 2023-07-18 20:15:19,143 DEBUG [RS:2;jenkins-hbase4:36795] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 20:15:19,144 DEBUG [RS:2;jenkins-hbase4:36795] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 20:15:19,144 DEBUG [RS:2;jenkins-hbase4:36795] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 20:15:19,144 DEBUG [RS:2;jenkins-hbase4:36795] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 20:15:19,144 DEBUG [RS:2;jenkins-hbase4:36795] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,144 DEBUG [RS:2;jenkins-hbase4:36795] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36795,1689711318462' 2023-07-18 20:15:19,144 DEBUG [RS:2;jenkins-hbase4:36795] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 20:15:19,145 DEBUG [RS:2;jenkins-hbase4:36795] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 20:15:19,145 DEBUG [RS:2;jenkins-hbase4:36795] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 20:15:19,146 INFO [RS:2;jenkins-hbase4:36795] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 20:15:19,149 INFO [RS:1;jenkins-hbase4:44825] regionserver.Replication(203): jenkins-hbase4.apache.org,44825,1689711318298 started 2023-07-18 20:15:19,149 INFO [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44825,1689711318298, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44825, sessionid=0x1017a1316a00002 2023-07-18 20:15:19,149 DEBUG [RS:1;jenkins-hbase4:44825] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 20:15:19,149 DEBUG [RS:1;jenkins-hbase4:44825] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:19,149 DEBUG [RS:1;jenkins-hbase4:44825] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44825,1689711318298' 2023-07-18 20:15:19,149 DEBUG [RS:1;jenkins-hbase4:44825] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 20:15:19,149 DEBUG 
[RS:1;jenkins-hbase4:44825] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 20:15:19,150 INFO [RS:2;jenkins-hbase4:36795] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,150 DEBUG [RS:1;jenkins-hbase4:44825] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 20:15:19,150 DEBUG [RS:1;jenkins-hbase4:44825] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 20:15:19,150 DEBUG [RS:1;jenkins-hbase4:44825] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:19,150 DEBUG [RS:1;jenkins-hbase4:44825] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44825,1689711318298' 2023-07-18 20:15:19,150 DEBUG [RS:1;jenkins-hbase4:44825] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 20:15:19,150 DEBUG [RS:2;jenkins-hbase4:36795] zookeeper.ZKUtil(398): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 20:15:19,150 INFO [RS:2;jenkins-hbase4:36795] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 20:15:19,151 DEBUG [RS:1;jenkins-hbase4:44825] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 20:15:19,151 DEBUG [RS:1;jenkins-hbase4:44825] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 20:15:19,151 INFO [RS:1;jenkins-hbase4:44825] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 20:15:19,151 INFO [RS:1;jenkins-hbase4:44825] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,151 INFO [RS:2;jenkins-hbase4:36795] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 20:15:19,151 INFO [RS:0;jenkins-hbase4:40503] regionserver.Replication(203): jenkins-hbase4.apache.org,40503,1689711318133 started 2023-07-18 20:15:19,152 INFO [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40503,1689711318133, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40503, sessionid=0x1017a1316a00001 2023-07-18 20:15:19,152 DEBUG [RS:0;jenkins-hbase4:40503] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 20:15:19,152 DEBUG [RS:0;jenkins-hbase4:40503] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:19,152 DEBUG [RS:1;jenkins-hbase4:44825] zookeeper.ZKUtil(398): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 20:15:19,152 INFO [RS:1;jenkins-hbase4:44825] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 20:15:19,152 DEBUG [RS:0;jenkins-hbase4:40503] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40503,1689711318133' 2023-07-18 20:15:19,152 DEBUG [RS:0;jenkins-hbase4:40503] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 20:15:19,152 INFO [RS:2;jenkins-hbase4:36795] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,152 INFO [RS:1;jenkins-hbase4:44825] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,152 INFO [RS:1;jenkins-hbase4:44825] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
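The flush-table-proc and online-snapshot procedure members registered on each region server above are the server-side halves of two Admin operations. A minimal sketch of the client side, assuming an existing Admin handle and a hypothetical table name:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class FlushAndSnapshotSketch {
  public static void run(Admin admin) throws Exception {
    TableName tn = TableName.valueOf("demo_table"); // placeholder table
    admin.flush(tn);                     // coordinated via the flush-table-proc members
    admin.snapshot("demo_snapshot", tn); // coordinated via the online-snapshot members
  }
}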
2023-07-18 20:15:19,152 DEBUG [RS:0;jenkins-hbase4:40503] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 20:15:19,153 DEBUG [RS:0;jenkins-hbase4:40503] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 20:15:19,153 DEBUG [RS:0;jenkins-hbase4:40503] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 20:15:19,153 DEBUG [RS:0;jenkins-hbase4:40503] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:19,153 DEBUG [RS:0;jenkins-hbase4:40503] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40503,1689711318133' 2023-07-18 20:15:19,153 DEBUG [RS:0;jenkins-hbase4:40503] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 20:15:19,154 DEBUG [RS:0;jenkins-hbase4:40503] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 20:15:19,154 DEBUG [RS:0;jenkins-hbase4:40503] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 20:15:19,154 INFO [RS:0;jenkins-hbase4:40503] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 20:15:19,154 INFO [RS:0;jenkins-hbase4:40503] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,155 DEBUG [RS:0;jenkins-hbase4:40503] zookeeper.ZKUtil(398): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 20:15:19,155 INFO [RS:0;jenkins-hbase4:40503] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 20:15:19,155 INFO [RS:0;jenkins-hbase4:40503] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,155 INFO [RS:0;jenkins-hbase4:40503] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
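The RegionServerRpcQuotaManager started above enforces quotas that a client defines through the Admin API. A hedged sketch, with the user name left as a placeholder:

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class RpcQuotaSketch {
  public static void limitUser(Admin admin, String user) throws Exception {
    // Throttle the given user to 100 requests per second.
    admin.setQuota(
        QuotaSettingsFactory.throttleUser(user, ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
  }
}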
2023-07-18 20:15:19,158 WARN [ReadOnlyZKClient-127.0.0.1:61189@0x6721f9f6] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 20:15:19,158 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41401,1689711317943] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:15:19,160 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46156, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:15:19,161 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44825] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:46156 deadline: 1689711379160, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:19,257 INFO [RS:1;jenkins-hbase4:44825] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44825%2C1689711318298, suffix=, logDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,44825,1689711318298, archiveDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/oldWALs, maxLogs=32 2023-07-18 20:15:19,257 INFO [RS:2;jenkins-hbase4:36795] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36795%2C1689711318462, suffix=, logDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,36795,1689711318462, archiveDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/oldWALs, maxLogs=32 2023-07-18 20:15:19,257 INFO [RS:0;jenkins-hbase4:40503] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40503%2C1689711318133, suffix=, logDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,40503,1689711318133, archiveDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/oldWALs, maxLogs=32 2023-07-18 20:15:19,276 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33579,DS-9fda3b67-04de-45e9-9a1a-510cb40b5905,DISK] 2023-07-18 20:15:19,276 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40187,DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e,DISK] 2023-07-18 20:15:19,276 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:19,284 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:15:19,284 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33871,DS-6e0821fb-1fab-489b-91a6-28e642a955a3,DISK] 2023-07-18 
20:15:19,286 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46164, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:15:19,291 INFO [RS:1;jenkins-hbase4:44825] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,44825,1689711318298/jenkins-hbase4.apache.org%2C44825%2C1689711318298.1689711319260 2023-07-18 20:15:19,294 DEBUG [RS:1;jenkins-hbase4:44825] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40187,DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e,DISK], DatanodeInfoWithStorage[127.0.0.1:33579,DS-9fda3b67-04de-45e9-9a1a-510cb40b5905,DISK], DatanodeInfoWithStorage[127.0.0.1:33871,DS-6e0821fb-1fab-489b-91a6-28e642a955a3,DISK]] 2023-07-18 20:15:19,296 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 20:15:19,297 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:19,297 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33871,DS-6e0821fb-1fab-489b-91a6-28e642a955a3,DISK] 2023-07-18 20:15:19,297 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40187,DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e,DISK] 2023-07-18 20:15:19,297 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40187,DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e,DISK] 2023-07-18 20:15:19,298 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33579,DS-9fda3b67-04de-45e9-9a1a-510cb40b5905,DISK] 2023-07-18 20:15:19,298 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33871,DS-6e0821fb-1fab-489b-91a6-28e642a955a3,DISK] 2023-07-18 20:15:19,298 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33579,DS-9fda3b67-04de-45e9-9a1a-510cb40b5905,DISK] 2023-07-18 20:15:19,304 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44825%2C1689711318298.meta, suffix=.meta, logDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,44825,1689711318298, archiveDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/oldWALs, maxLogs=32 2023-07-18 20:15:19,309 INFO [RS:0;jenkins-hbase4:40503] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,40503,1689711318133/jenkins-hbase4.apache.org%2C40503%2C1689711318133.1689711319260 2023-07-18 20:15:19,309 INFO [RS:2;jenkins-hbase4:36795] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,36795,1689711318462/jenkins-hbase4.apache.org%2C36795%2C1689711318462.1689711319260 2023-07-18 20:15:19,310 DEBUG [RS:0;jenkins-hbase4:40503] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40187,DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e,DISK], DatanodeInfoWithStorage[127.0.0.1:33871,DS-6e0821fb-1fab-489b-91a6-28e642a955a3,DISK], DatanodeInfoWithStorage[127.0.0.1:33579,DS-9fda3b67-04de-45e9-9a1a-510cb40b5905,DISK]] 2023-07-18 20:15:19,311 DEBUG [RS:2;jenkins-hbase4:36795] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33871,DS-6e0821fb-1fab-489b-91a6-28e642a955a3,DISK], DatanodeInfoWithStorage[127.0.0.1:40187,DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e,DISK], DatanodeInfoWithStorage[127.0.0.1:33579,DS-9fda3b67-04de-45e9-9a1a-510cb40b5905,DISK]] 2023-07-18 20:15:19,325 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40187,DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e,DISK] 2023-07-18 20:15:19,325 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33871,DS-6e0821fb-1fab-489b-91a6-28e642a955a3,DISK] 2023-07-18 20:15:19,325 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33579,DS-9fda3b67-04de-45e9-9a1a-510cb40b5905,DISK] 2023-07-18 20:15:19,331 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,44825,1689711318298/jenkins-hbase4.apache.org%2C44825%2C1689711318298.meta.1689711319305.meta 2023-07-18 20:15:19,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33579,DS-9fda3b67-04de-45e9-9a1a-510cb40b5905,DISK], DatanodeInfoWithStorage[127.0.0.1:40187,DS-68d0d7b7-9a7a-48c2-ac79-2574caa3302e,DISK], DatanodeInfoWithStorage[127.0.0.1:33871,DS-6e0821fb-1fab-489b-91a6-28e642a955a3,DISK]] 2023-07-18 20:15:19,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:19,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 20:15:19,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 20:15:19,331 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): 
Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-18 20:15:19,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 20:15:19,332 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:19,332 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 20:15:19,332 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 20:15:19,333 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 20:15:19,334 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/info 2023-07-18 20:15:19,334 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/info 2023-07-18 20:15:19,334 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 20:15:19,335 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:19,335 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 20:15:19,336 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/rep_barrier 2023-07-18 20:15:19,336 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/rep_barrier 2023-07-18 20:15:19,337 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 20:15:19,339 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:19,340 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 20:15:19,341 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/table 2023-07-18 20:15:19,341 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/table 2023-07-18 20:15:19,341 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 20:15:19,342 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:19,351 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740 2023-07-18 20:15:19,352 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740 2023-07-18 20:15:19,355 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 20:15:19,357 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 20:15:19,358 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10757432160, jitterRate=0.0018639415502548218}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 20:15:19,358 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 20:15:19,359 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689711319276 2023-07-18 20:15:19,364 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 20:15:19,365 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 20:15:19,365 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44825,1689711318298, state=OPEN 2023-07-18 20:15:19,367 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 20:15:19,368 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 20:15:19,370 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 20:15:19,370 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44825,1689711318298 in 251 msec 2023-07-18 20:15:19,372 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 20:15:19,372 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 414 msec 2023-07-18 20:15:19,374 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 523 msec 2023-07-18 20:15:19,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689711319375, completionTime=-1 2023-07-18 20:15:19,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 20:15:19,375 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
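The master above reports 3 of the expected 3 region servers checked in. From a test or client, the equivalent count is available through ClusterMetrics; a small sketch assuming an Admin handle:

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.client.Admin;

public class LiveServerCountSketch {
  public static int liveServers(Admin admin) throws java.io.IOException {
    ClusterMetrics metrics = admin.getClusterMetrics();
    return metrics.getLiveServerMetrics().size(); // 3 for this mini-cluster
  }
}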
2023-07-18 20:15:19,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 20:15:19,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689711379379 2023-07-18 20:15:19,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689711439379 2023-07-18 20:15:19,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-18 20:15:19,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41401,1689711317943-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41401,1689711317943-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41401,1689711317943-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41401, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 20:15:19,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:19,388 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 20:15:19,396 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:19,396 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 20:15:19,397 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:19,398 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41 2023-07-18 20:15:19,399 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41 empty. 2023-07-18 20:15:19,399 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41 2023-07-18 20:15:19,399 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 20:15:19,415 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:19,416 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => bacc985e3e2f993287ffc68ccedd8a41, NAME => 'hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp 2023-07-18 20:15:19,430 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:19,430 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing bacc985e3e2f993287ffc68ccedd8a41, disabling compactions & flushes 2023-07-18 20:15:19,430 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 
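The hbase:namespace descriptor logged above (ROW bloom filter, in-memory, 10 versions, 8 KB blocks) can be mirrored for an ordinary table through the public builder API. A hedged sketch with a hypothetical table name:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceLikeTableSketch {
  public static void create(Admin admin) throws java.io.IOException {
    admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_ns_like"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)
            .setInMemory(true)
            .setMaxVersions(10)
            .setBlocksize(8192)
            .build())
        .build());
  }
}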
2023-07-18 20:15:19,430 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 2023-07-18 20:15:19,430 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. after waiting 0 ms 2023-07-18 20:15:19,430 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 2023-07-18 20:15:19,430 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 2023-07-18 20:15:19,430 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for bacc985e3e2f993287ffc68ccedd8a41: 2023-07-18 20:15:19,433 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:19,433 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711319433"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711319433"}]},"ts":"1689711319433"} 2023-07-18 20:15:19,436 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 20:15:19,437 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:19,437 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711319437"}]},"ts":"1689711319437"} 2023-07-18 20:15:19,438 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 20:15:19,442 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:19,442 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:19,442 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:19,442 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:19,442 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:19,442 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=bacc985e3e2f993287ffc68ccedd8a41, ASSIGN}] 2023-07-18 20:15:19,446 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=bacc985e3e2f993287ffc68ccedd8a41, ASSIGN 2023-07-18 20:15:19,447 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=bacc985e3e2f993287ffc68ccedd8a41, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36795,1689711318462; forceNewPlan=false, retain=false 2023-07-18 20:15:19,462 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41401,1689711317943] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:19,464 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41401,1689711317943] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 20:15:19,465 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:19,466 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:19,467 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a 2023-07-18 20:15:19,468 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a empty. 
2023-07-18 20:15:19,468 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a 2023-07-18 20:15:19,468 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 20:15:19,484 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:19,485 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 37ddee31644840a35a075d4f94e4588a, NAME => 'hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp 2023-07-18 20:15:19,498 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:19,498 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 37ddee31644840a35a075d4f94e4588a, disabling compactions & flushes 2023-07-18 20:15:19,498 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 2023-07-18 20:15:19,498 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 2023-07-18 20:15:19,498 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. after waiting 0 ms 2023-07-18 20:15:19,498 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 2023-07-18 20:15:19,498 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 
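The hbase:rsgroup descriptor above carries two table-level attributes: the MultiRowMutationEndpoint coprocessor and a DisabledRegionSplitPolicy. Both can be expressed through TableDescriptorBuilder; the sketch below applies them to a hypothetical table rather than the system table itself.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RsGroupLikeTableSketch {
  public static TableDescriptor build() throws java.io.IOException {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_rsgroup_like"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m")).build())
        // Same endpoint class the log reports loading from the HTD.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // Keep the table in a single region, as DisabledRegionSplitPolicy does.
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
  }
}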
2023-07-18 20:15:19,498 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 37ddee31644840a35a075d4f94e4588a: 2023-07-18 20:15:19,504 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:19,505 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711319505"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711319505"}]},"ts":"1689711319505"} 2023-07-18 20:15:19,507 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 20:15:19,507 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:19,508 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711319508"}]},"ts":"1689711319508"} 2023-07-18 20:15:19,509 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 20:15:19,512 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:19,512 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:19,512 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:19,512 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:19,512 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:19,512 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=37ddee31644840a35a075d4f94e4588a, ASSIGN}] 2023-07-18 20:15:19,513 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=37ddee31644840a35a075d4f94e4588a, ASSIGN 2023-07-18 20:15:19,514 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=37ddee31644840a35a075d4f94e4588a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36795,1689711318462; forceNewPlan=false, retain=false 2023-07-18 20:15:19,514 INFO [jenkins-hbase4:41401] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-18 20:15:19,516 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=bacc985e3e2f993287ffc68ccedd8a41, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,517 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711319516"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711319516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711319516"}]},"ts":"1689711319516"} 2023-07-18 20:15:19,517 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=37ddee31644840a35a075d4f94e4588a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,517 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711319517"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711319517"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711319517"}]},"ts":"1689711319517"} 2023-07-18 20:15:19,518 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure bacc985e3e2f993287ffc68ccedd8a41, server=jenkins-hbase4.apache.org,36795,1689711318462}] 2023-07-18 20:15:19,519 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 37ddee31644840a35a075d4f94e4588a, server=jenkins-hbase4.apache.org,36795,1689711318462}] 2023-07-18 20:15:19,670 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,670 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:15:19,672 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52144, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:15:19,676 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 2023-07-18 20:15:19,676 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 37ddee31644840a35a075d4f94e4588a, NAME => 'hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:19,676 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 20:15:19,677 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. service=MultiRowMutationService 2023-07-18 20:15:19,677 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-18 20:15:19,677 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 37ddee31644840a35a075d4f94e4588a 2023-07-18 20:15:19,677 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:19,677 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 37ddee31644840a35a075d4f94e4588a 2023-07-18 20:15:19,677 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 37ddee31644840a35a075d4f94e4588a 2023-07-18 20:15:19,678 INFO [StoreOpener-37ddee31644840a35a075d4f94e4588a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 37ddee31644840a35a075d4f94e4588a 2023-07-18 20:15:19,680 DEBUG [StoreOpener-37ddee31644840a35a075d4f94e4588a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a/m 2023-07-18 20:15:19,680 DEBUG [StoreOpener-37ddee31644840a35a075d4f94e4588a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a/m 2023-07-18 20:15:19,680 INFO [StoreOpener-37ddee31644840a35a075d4f94e4588a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 37ddee31644840a35a075d4f94e4588a columnFamilyName m 2023-07-18 20:15:19,681 INFO [StoreOpener-37ddee31644840a35a075d4f94e4588a-1] regionserver.HStore(310): Store=37ddee31644840a35a075d4f94e4588a/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:19,682 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a 2023-07-18 20:15:19,682 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a 2023-07-18 20:15:19,685 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 37ddee31644840a35a075d4f94e4588a 2023-07-18 20:15:19,688 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:19,689 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 37ddee31644840a35a075d4f94e4588a; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@73ba116c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:19,689 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 37ddee31644840a35a075d4f94e4588a: 2023-07-18 20:15:19,689 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a., pid=9, masterSystemTime=1689711319670 2023-07-18 20:15:19,694 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 2023-07-18 20:15:19,694 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 2023-07-18 20:15:19,694 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 2023-07-18 20:15:19,694 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bacc985e3e2f993287ffc68ccedd8a41, NAME => 'hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:19,694 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=37ddee31644840a35a075d4f94e4588a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace bacc985e3e2f993287ffc68ccedd8a41 2023-07-18 20:15:19,695 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711319694"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711319694"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711319694"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711319694"}]},"ts":"1689711319694"} 2023-07-18 20:15:19,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:19,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bacc985e3e2f993287ffc68ccedd8a41 2023-07-18 20:15:19,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(7897): checking classloading for bacc985e3e2f993287ffc68ccedd8a41 2023-07-18 20:15:19,696 INFO [StoreOpener-bacc985e3e2f993287ffc68ccedd8a41-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region bacc985e3e2f993287ffc68ccedd8a41 2023-07-18 20:15:19,697 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 20:15:19,698 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 37ddee31644840a35a075d4f94e4588a, server=jenkins-hbase4.apache.org,36795,1689711318462 in 177 msec 2023-07-18 20:15:19,698 DEBUG [StoreOpener-bacc985e3e2f993287ffc68ccedd8a41-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41/info 2023-07-18 20:15:19,698 DEBUG [StoreOpener-bacc985e3e2f993287ffc68ccedd8a41-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41/info 2023-07-18 20:15:19,698 INFO [StoreOpener-bacc985e3e2f993287ffc68ccedd8a41-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bacc985e3e2f993287ffc68ccedd8a41 columnFamilyName info 2023-07-18 20:15:19,699 INFO [StoreOpener-bacc985e3e2f993287ffc68ccedd8a41-1] regionserver.HStore(310): Store=bacc985e3e2f993287ffc68ccedd8a41/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:19,700 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-18 20:15:19,700 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=37ddee31644840a35a075d4f94e4588a, ASSIGN in 186 msec 2023-07-18 20:15:19,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41 2023-07-18 20:15:19,701 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:19,701 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711319701"}]},"ts":"1689711319701"} 2023-07-18 20:15:19,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41 2023-07-18 20:15:19,703 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 20:15:19,705 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:19,706 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bacc985e3e2f993287ffc68ccedd8a41 2023-07-18 20:15:19,707 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 243 msec 2023-07-18 20:15:19,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:19,718 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bacc985e3e2f993287ffc68ccedd8a41; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10893286720, jitterRate=0.014516383409500122}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:19,718 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bacc985e3e2f993287ffc68ccedd8a41: 2023-07-18 20:15:19,719 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41., pid=8, masterSystemTime=1689711319670 2023-07-18 20:15:19,721 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 2023-07-18 20:15:19,721 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 
2023-07-18 20:15:19,721 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=bacc985e3e2f993287ffc68ccedd8a41, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:19,721 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711319721"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711319721"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711319721"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711319721"}]},"ts":"1689711319721"} 2023-07-18 20:15:19,725 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-18 20:15:19,725 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure bacc985e3e2f993287ffc68ccedd8a41, server=jenkins-hbase4.apache.org,36795,1689711318462 in 205 msec 2023-07-18 20:15:19,727 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-18 20:15:19,727 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=bacc985e3e2f993287ffc68ccedd8a41, ASSIGN in 283 msec 2023-07-18 20:15:19,729 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:19,729 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711319729"}]},"ts":"1689711319729"} 2023-07-18 20:15:19,730 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 20:15:19,733 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:19,734 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 346 msec 2023-07-18 20:15:19,767 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41401,1689711317943] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:15:19,769 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52154, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:15:19,772 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41401,1689711317943] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 20:15:19,772 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41401,1689711317943] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-18 20:15:19,777 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:19,777 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41401,1689711317943] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:19,778 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41401,1689711317943] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 20:15:19,781 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41401,1689711317943] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 20:15:19,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 20:15:19,790 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:15:19,791 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:19,795 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 20:15:19,802 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:15:19,805 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-18 20:15:19,807 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 20:15:19,815 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:15:19,818 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-07-18 20:15:19,831 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 20:15:19,834 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 20:15:19,834 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.175sec 2023-07-18 20:15:19,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-18 20:15:19,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:19,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-18 20:15:19,836 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-18 20:15:19,837 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:19,838 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:19,839 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-18 20:15:19,839 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/quota/bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:19,840 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/quota/bd2317d5584e5026790dbd60aa6646bf empty. 2023-07-18 20:15:19,841 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/quota/bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:19,841 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-18 20:15:19,844 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-18 20:15:19,844 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-18 20:15:19,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:19,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-18 20:15:19,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 20:15:19,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41401,1689711317943-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 20:15:19,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41401,1689711317943-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-18 20:15:19,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 20:15:19,853 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:19,854 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => bd2317d5584e5026790dbd60aa6646bf, NAME => 'hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp 2023-07-18 20:15:19,855 DEBUG [Listener at localhost/37791] zookeeper.ReadOnlyZKClient(139): Connect 0x04f4a10d to 127.0.0.1:61189 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:19,865 DEBUG [Listener at localhost/37791] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@772353c4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:15:19,870 DEBUG [hconnection-0x54e4bf8-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:15:19,870 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:19,871 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing bd2317d5584e5026790dbd60aa6646bf, disabling compactions & flushes 2023-07-18 20:15:19,871 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 2023-07-18 20:15:19,871 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 
2023-07-18 20:15:19,871 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. after waiting 0 ms 2023-07-18 20:15:19,871 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 2023-07-18 20:15:19,871 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 2023-07-18 20:15:19,871 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for bd2317d5584e5026790dbd60aa6646bf: 2023-07-18 20:15:19,872 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46168, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:15:19,874 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41401,1689711317943 2023-07-18 20:15:19,874 INFO [Listener at localhost/37791] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:19,876 DEBUG [Listener at localhost/37791] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 20:15:19,877 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:19,878 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689711319877"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711319877"}]},"ts":"1689711319877"} 2023-07-18 20:15:19,879 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 20:15:19,880 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:19,880 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711319880"}]},"ts":"1689711319880"} 2023-07-18 20:15:19,881 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-18 20:15:19,883 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47718, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 20:15:19,886 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:19,886 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:19,886 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:19,886 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:19,886 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:19,886 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=bd2317d5584e5026790dbd60aa6646bf, ASSIGN}] 2023-07-18 20:15:19,887 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=bd2317d5584e5026790dbd60aa6646bf, ASSIGN 2023-07-18 20:15:19,888 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 20:15:19,888 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:19,888 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=bd2317d5584e5026790dbd60aa6646bf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40503,1689711318133; forceNewPlan=false, retain=false 2023-07-18 20:15:19,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 20:15:19,890 DEBUG [Listener at localhost/37791] zookeeper.ReadOnlyZKClient(139): Connect 0x6b7167b7 to 127.0.0.1:61189 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:19,896 DEBUG [Listener at localhost/37791] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c97e0aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, 
bind address=null 2023-07-18 20:15:19,896 INFO [Listener at localhost/37791] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:61189 2023-07-18 20:15:19,899 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:15:19,900 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017a1316a0000a connected 2023-07-18 20:15:19,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-18 20:15:19,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-18 20:15:19,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 20:15:19,917 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:15:19,920 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 15 msec 2023-07-18 20:15:20,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 20:15:20,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:20,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-18 20:15:20,019 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:20,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-18 20:15:20,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 20:15:20,021 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:20,022 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 20:15:20,024 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 
2023-07-18 20:15:20,026 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:20,027 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5 empty. 2023-07-18 20:15:20,027 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:20,027 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 20:15:20,039 INFO [jenkins-hbase4:41401] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 20:15:20,040 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=bd2317d5584e5026790dbd60aa6646bf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:20,040 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689711320040"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711320040"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711320040"}]},"ts":"1689711320040"} 2023-07-18 20:15:20,042 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure bd2317d5584e5026790dbd60aa6646bf, server=jenkins-hbase4.apache.org,40503,1689711318133}] 2023-07-18 20:15:20,042 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:20,043 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1961f6560f9d4e246b7ceeb8213704e5, NAME => 'np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp 2023-07-18 20:15:20,052 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:20,052 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 1961f6560f9d4e246b7ceeb8213704e5, disabling compactions & flushes 2023-07-18 20:15:20,052 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. 2023-07-18 20:15:20,052 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. 
2023-07-18 20:15:20,052 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. after waiting 0 ms 2023-07-18 20:15:20,052 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. 2023-07-18 20:15:20,052 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. 2023-07-18 20:15:20,053 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 1961f6560f9d4e246b7ceeb8213704e5: 2023-07-18 20:15:20,054 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:20,055 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689711320055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711320055"}]},"ts":"1689711320055"} 2023-07-18 20:15:20,056 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 20:15:20,057 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:20,057 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711320057"}]},"ts":"1689711320057"} 2023-07-18 20:15:20,058 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-18 20:15:20,061 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:20,062 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:20,062 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:20,062 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:20,062 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:20,062 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=1961f6560f9d4e246b7ceeb8213704e5, ASSIGN}] 2023-07-18 20:15:20,063 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=1961f6560f9d4e246b7ceeb8213704e5, ASSIGN 2023-07-18 20:15:20,063 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=1961f6560f9d4e246b7ceeb8213704e5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36795,1689711318462; forceNewPlan=false, retain=false 2023-07-18 20:15:20,122 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 20:15:20,194 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:20,195 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:15:20,196 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52520, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:15:20,201 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 2023-07-18 20:15:20,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bd2317d5584e5026790dbd60aa6646bf, NAME => 'hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:20,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:20,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:20,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:20,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:20,202 INFO [StoreOpener-bd2317d5584e5026790dbd60aa6646bf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:20,204 DEBUG [StoreOpener-bd2317d5584e5026790dbd60aa6646bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/quota/bd2317d5584e5026790dbd60aa6646bf/q 2023-07-18 20:15:20,204 DEBUG [StoreOpener-bd2317d5584e5026790dbd60aa6646bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/quota/bd2317d5584e5026790dbd60aa6646bf/q 2023-07-18 20:15:20,204 INFO [StoreOpener-bd2317d5584e5026790dbd60aa6646bf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bd2317d5584e5026790dbd60aa6646bf columnFamilyName q 2023-07-18 20:15:20,205 INFO [StoreOpener-bd2317d5584e5026790dbd60aa6646bf-1] regionserver.HStore(310): Store=bd2317d5584e5026790dbd60aa6646bf/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:20,205 INFO [StoreOpener-bd2317d5584e5026790dbd60aa6646bf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:20,206 DEBUG [StoreOpener-bd2317d5584e5026790dbd60aa6646bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/quota/bd2317d5584e5026790dbd60aa6646bf/u 2023-07-18 20:15:20,206 DEBUG [StoreOpener-bd2317d5584e5026790dbd60aa6646bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/quota/bd2317d5584e5026790dbd60aa6646bf/u 2023-07-18 20:15:20,206 INFO [StoreOpener-bd2317d5584e5026790dbd60aa6646bf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bd2317d5584e5026790dbd60aa6646bf columnFamilyName u 2023-07-18 20:15:20,207 INFO [StoreOpener-bd2317d5584e5026790dbd60aa6646bf-1] regionserver.HStore(310): Store=bd2317d5584e5026790dbd60aa6646bf/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:20,208 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/quota/bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:20,208 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/quota/bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:20,209 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-18 20:15:20,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:20,213 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/quota/bd2317d5584e5026790dbd60aa6646bf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:20,213 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bd2317d5584e5026790dbd60aa6646bf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11015508800, jitterRate=0.02589920163154602}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-18 20:15:20,213 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bd2317d5584e5026790dbd60aa6646bf: 2023-07-18 20:15:20,214 INFO [jenkins-hbase4:41401] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 20:15:20,215 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1961f6560f9d4e246b7ceeb8213704e5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:20,215 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf., pid=16, masterSystemTime=1689711320194 2023-07-18 20:15:20,215 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689711320215"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711320215"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711320215"}]},"ts":"1689711320215"} 2023-07-18 20:15:20,218 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 2023-07-18 20:15:20,218 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 
2023-07-18 20:15:20,219 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 1961f6560f9d4e246b7ceeb8213704e5, server=jenkins-hbase4.apache.org,36795,1689711318462}] 2023-07-18 20:15:20,221 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=bd2317d5584e5026790dbd60aa6646bf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:20,221 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689711320221"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711320221"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711320221"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711320221"}]},"ts":"1689711320221"} 2023-07-18 20:15:20,223 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-18 20:15:20,224 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure bd2317d5584e5026790dbd60aa6646bf, server=jenkins-hbase4.apache.org,40503,1689711318133 in 180 msec 2023-07-18 20:15:20,225 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 20:15:20,225 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=bd2317d5584e5026790dbd60aa6646bf, ASSIGN in 338 msec 2023-07-18 20:15:20,226 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:20,226 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711320226"}]},"ts":"1689711320226"} 2023-07-18 20:15:20,227 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-18 20:15:20,229 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:20,230 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 394 msec 2023-07-18 20:15:20,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 20:15:20,376 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. 
2023-07-18 20:15:20,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1961f6560f9d4e246b7ceeb8213704e5, NAME => 'np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:20,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:20,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:20,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:20,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:20,378 INFO [StoreOpener-1961f6560f9d4e246b7ceeb8213704e5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:20,379 DEBUG [StoreOpener-1961f6560f9d4e246b7ceeb8213704e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5/fam1 2023-07-18 20:15:20,379 DEBUG [StoreOpener-1961f6560f9d4e246b7ceeb8213704e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5/fam1 2023-07-18 20:15:20,380 INFO [StoreOpener-1961f6560f9d4e246b7ceeb8213704e5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1961f6560f9d4e246b7ceeb8213704e5 columnFamilyName fam1 2023-07-18 20:15:20,380 INFO [StoreOpener-1961f6560f9d4e246b7ceeb8213704e5-1] regionserver.HStore(310): Store=1961f6560f9d4e246b7ceeb8213704e5/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:20,381 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:20,381 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:20,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:20,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:20,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1961f6560f9d4e246b7ceeb8213704e5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10455479680, jitterRate=-0.026257574558258057}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:20,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1961f6560f9d4e246b7ceeb8213704e5: 2023-07-18 20:15:20,387 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5., pid=18, masterSystemTime=1689711320372 2023-07-18 20:15:20,389 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. 2023-07-18 20:15:20,389 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. 2023-07-18 20:15:20,389 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1961f6560f9d4e246b7ceeb8213704e5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:20,389 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689711320389"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711320389"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711320389"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711320389"}]},"ts":"1689711320389"} 2023-07-18 20:15:20,392 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 20:15:20,392 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 1961f6560f9d4e246b7ceeb8213704e5, server=jenkins-hbase4.apache.org,36795,1689711318462 in 172 msec 2023-07-18 20:15:20,394 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-18 20:15:20,394 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=1961f6560f9d4e246b7ceeb8213704e5, ASSIGN in 330 msec 2023-07-18 20:15:20,394 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:20,394 DEBUG [PEWorker-4] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711320394"}]},"ts":"1689711320394"} 2023-07-18 20:15:20,395 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-18 20:15:20,398 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:20,400 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 383 msec 2023-07-18 20:15:20,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 20:15:20,624 INFO [Listener at localhost/37791] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-18 20:15:20,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:20,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-18 20:15:20,630 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:20,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-18 20:15:20,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 20:15:20,646 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:15:20,647 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36790, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:15:20,650 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=21 msec 2023-07-18 20:15:20,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 20:15:20,735 INFO [Listener at localhost/37791] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-18 20:15:20,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:20,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:20,737 INFO [Listener at localhost/37791] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-18 20:15:20,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-18 20:15:20,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-18 20:15:20,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 20:15:20,741 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711320740"}]},"ts":"1689711320740"} 2023-07-18 20:15:20,742 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-18 20:15:20,743 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-18 20:15:20,744 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=1961f6560f9d4e246b7ceeb8213704e5, UNASSIGN}] 2023-07-18 20:15:20,745 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=1961f6560f9d4e246b7ceeb8213704e5, UNASSIGN 2023-07-18 20:15:20,745 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=1961f6560f9d4e246b7ceeb8213704e5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:20,745 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689711320745"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711320745"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711320745"}]},"ts":"1689711320745"} 2023-07-18 20:15:20,747 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 1961f6560f9d4e246b7ceeb8213704e5, server=jenkins-hbase4.apache.org,36795,1689711318462}] 2023-07-18 20:15:20,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 20:15:20,899 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:20,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1961f6560f9d4e246b7ceeb8213704e5, disabling compactions & flushes 2023-07-18 20:15:20,900 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. 2023-07-18 20:15:20,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. 2023-07-18 20:15:20,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. after waiting 0 ms 2023-07-18 20:15:20,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. 2023-07-18 20:15:20,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:20,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5. 
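Editor's note: the rolled-back pid=19 above is the namespace region quota at work: np1 carries a maxregions limit of 5, so creating np1:table2 with a region count that would exceed it fails with QuotaExceededException before any region is assigned. The sketch below shows one plausible way to declare that quota and trip the same check; the split keys and exact region counts are illustrative and may differ from what the test actually requests.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceRegionQuotaSketch {
    static void createWithQuota(Admin admin) throws IOException {
        // Namespace capped at 5 regions via the namespace quota property.
        admin.createNamespace(NamespaceDescriptor.create("np1")
                .addConfiguration("hbase.namespace.quota.maxregions", "5")
                .build());
        // Five split keys -> six regions, more than the namespace allows.
        byte[][] splits = {
            Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
            Bytes.toBytes("4"), Bytes.toBytes("5")
        };
        try {
            admin.createTable(TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("np1", "table2"))
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
                    .build(), splits);
        } catch (IOException e) {
            // Surfaces as org.apache.hadoop.hbase.quotas.QuotaExceededException,
            // matching the rollback logged for pid=19 above.
            System.out.println("create rejected: " + e.getMessage());
        }
    }
}
```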
2023-07-18 20:15:20,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1961f6560f9d4e246b7ceeb8213704e5: 2023-07-18 20:15:20,909 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:20,909 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=1961f6560f9d4e246b7ceeb8213704e5, regionState=CLOSED 2023-07-18 20:15:20,909 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689711320909"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711320909"}]},"ts":"1689711320909"} 2023-07-18 20:15:20,912 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-18 20:15:20,912 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 1961f6560f9d4e246b7ceeb8213704e5, server=jenkins-hbase4.apache.org,36795,1689711318462 in 164 msec 2023-07-18 20:15:20,913 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-18 20:15:20,913 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=1961f6560f9d4e246b7ceeb8213704e5, UNASSIGN in 168 msec 2023-07-18 20:15:20,914 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711320914"}]},"ts":"1689711320914"} 2023-07-18 20:15:20,915 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-18 20:15:20,917 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-18 20:15:20,918 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 180 msec 2023-07-18 20:15:21,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 20:15:21,043 INFO [Listener at localhost/37791] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-18 20:15:21,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-18 20:15:21,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-18 20:15:21,046 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 20:15:21,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-18 20:15:21,047 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 20:15:21,048 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:21,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 20:15:21,050 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:21,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 20:15:21,052 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5/fam1, FileablePath, hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5/recovered.edits] 2023-07-18 20:15:21,056 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5/recovered.edits/4.seqid to hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/archive/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5/recovered.edits/4.seqid 2023-07-18 20:15:21,057 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/.tmp/data/np1/table1/1961f6560f9d4e246b7ceeb8213704e5 2023-07-18 20:15:21,057 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 20:15:21,059 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 20:15:21,061 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-18 20:15:21,063 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-18 20:15:21,064 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 20:15:21,064 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-18 20:15:21,064 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711321064"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:21,065 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 20:15:21,065 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1961f6560f9d4e246b7ceeb8213704e5, NAME => 'np1:table1,,1689711320016.1961f6560f9d4e246b7ceeb8213704e5.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 20:15:21,065 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
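Editor's note: the DisableTableProcedure (pid=20), DeleteTableProcedure (pid=23), and the DeleteNamespaceProcedure that follows correspond to an ordinary client-side teardown: disable, delete the table (its region directory is archived under archive/data/... as logged above, then removed from hbase:meta), then delete the namespace. A minimal sketch, assuming 'admin' comes from an open Connection:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DropTableAndNamespaceSketch {
    static void dropAll(Admin admin) throws Exception {
        TableName table1 = TableName.valueOf("np1", "table1");
        if (admin.tableExists(table1)) {
            admin.disableTable(table1);   // DisableTableProcedure (pid=20 above)
            admin.deleteTable(table1);    // DeleteTableProcedure (pid=23): regions are
                                          // archived, then removed from hbase:meta
        }
        admin.deleteNamespace("np1");     // DeleteNamespaceProcedure (see pid=24 below)
    }
}
```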
2023-07-18 20:15:21,065 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689711321065"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:21,067 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-18 20:15:21,071 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 20:15:21,072 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 28 msec 2023-07-18 20:15:21,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 20:15:21,153 INFO [Listener at localhost/37791] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-18 20:15:21,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-18 20:15:21,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-18 20:15:21,168 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 20:15:21,170 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 20:15:21,173 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 20:15:21,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 20:15:21,174 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-18 20:15:21,174 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:15:21,176 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 20:15:21,178 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 20:15:21,179 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 19 msec 2023-07-18 20:15:21,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41401] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 20:15:21,275 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 20:15:21,276 INFO [Listener at 
localhost/37791] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 20:15:21,276 DEBUG [Listener at localhost/37791] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x04f4a10d to 127.0.0.1:61189 2023-07-18 20:15:21,276 DEBUG [Listener at localhost/37791] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:21,276 DEBUG [Listener at localhost/37791] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 20:15:21,276 DEBUG [Listener at localhost/37791] util.JVMClusterUtil(257): Found active master hash=1197181295, stopped=false 2023-07-18 20:15:21,276 DEBUG [Listener at localhost/37791] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 20:15:21,277 DEBUG [Listener at localhost/37791] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 20:15:21,277 DEBUG [Listener at localhost/37791] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-18 20:15:21,277 INFO [Listener at localhost/37791] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41401,1689711317943 2023-07-18 20:15:21,278 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:21,278 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:21,278 INFO [Listener at localhost/37791] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 20:15:21,278 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:21,278 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:21,278 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:21,279 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:21,279 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:21,280 DEBUG [Listener at localhost/37791] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6721f9f6 to 127.0.0.1:61189 2023-07-18 20:15:21,280 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:21,280 DEBUG [Listener at localhost/37791] 
ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:21,280 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:21,280 INFO [Listener at localhost/37791] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40503,1689711318133' ***** 2023-07-18 20:15:21,280 INFO [Listener at localhost/37791] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 20:15:21,281 INFO [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:21,282 INFO [Listener at localhost/37791] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44825,1689711318298' ***** 2023-07-18 20:15:21,282 INFO [Listener at localhost/37791] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 20:15:21,288 INFO [Listener at localhost/37791] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36795,1689711318462' ***** 2023-07-18 20:15:21,288 INFO [Listener at localhost/37791] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 20:15:21,288 INFO [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:21,288 INFO [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:21,289 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:21,295 INFO [RS:0;jenkins-hbase4:40503] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@ec73c16{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:21,297 INFO [RS:0;jenkins-hbase4:40503] server.AbstractConnector(383): Stopped ServerConnector@2e719b7b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:21,297 INFO [RS:0;jenkins-hbase4:40503] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:21,298 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 20:15:21,298 INFO [RS:0;jenkins-hbase4:40503] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@22cdd0c4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:21,298 INFO [RS:1;jenkins-hbase4:44825] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5033e85c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:21,298 INFO [RS:2;jenkins-hbase4:36795] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7bcb3b7d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:21,301 INFO [RS:0;jenkins-hbase4:40503] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@6acb7487{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:21,301 INFO [RS:1;jenkins-hbase4:44825] server.AbstractConnector(383): Stopped ServerConnector@5766466b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:21,301 INFO [RS:2;jenkins-hbase4:36795] server.AbstractConnector(383): Stopped ServerConnector@17e2c0dc{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:21,301 INFO [RS:1;jenkins-hbase4:44825] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:21,301 INFO [RS:2;jenkins-hbase4:36795] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:21,301 INFO [RS:2;jenkins-hbase4:36795] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@37c4b42f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:21,301 INFO [RS:1;jenkins-hbase4:44825] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7754bef0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:21,302 INFO [RS:2;jenkins-hbase4:36795] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ee15bbe{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:21,302 INFO [RS:1;jenkins-hbase4:44825] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@70c264{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:21,302 INFO [RS:2;jenkins-hbase4:36795] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 20:15:21,302 INFO [RS:2;jenkins-hbase4:36795] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 20:15:21,302 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 20:15:21,302 INFO [RS:2;jenkins-hbase4:36795] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 20:15:21,302 INFO [RS:1;jenkins-hbase4:44825] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 20:15:21,303 INFO [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(3305): Received CLOSE for bacc985e3e2f993287ffc68ccedd8a41 2023-07-18 20:15:21,303 INFO [RS:1;jenkins-hbase4:44825] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 20:15:21,303 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 20:15:21,303 INFO [RS:1;jenkins-hbase4:44825] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-18 20:15:21,303 INFO [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(3305): Received CLOSE for 37ddee31644840a35a075d4f94e4588a 2023-07-18 20:15:21,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bacc985e3e2f993287ffc68ccedd8a41, disabling compactions & flushes 2023-07-18 20:15:21,304 INFO [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:21,304 INFO [RS:0;jenkins-hbase4:40503] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 20:15:21,303 INFO [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:21,304 INFO [RS:0;jenkins-hbase4:40503] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 20:15:21,304 DEBUG [RS:1;jenkins-hbase4:44825] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1250e6ba to 127.0.0.1:61189 2023-07-18 20:15:21,304 DEBUG [RS:1;jenkins-hbase4:44825] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:21,304 DEBUG [RS:2;jenkins-hbase4:36795] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x56fc83ca to 127.0.0.1:61189 2023-07-18 20:15:21,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 2023-07-18 20:15:21,304 DEBUG [RS:2;jenkins-hbase4:36795] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:21,304 INFO [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-18 20:15:21,305 DEBUG [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1478): Online Regions={bacc985e3e2f993287ffc68ccedd8a41=hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41., 37ddee31644840a35a075d4f94e4588a=hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a.} 2023-07-18 20:15:21,305 DEBUG [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1504): Waiting on 37ddee31644840a35a075d4f94e4588a, bacc985e3e2f993287ffc68ccedd8a41 2023-07-18 20:15:21,304 INFO [RS:1;jenkins-hbase4:44825] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 20:15:21,305 INFO [RS:1;jenkins-hbase4:44825] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 20:15:21,305 INFO [RS:1;jenkins-hbase4:44825] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 20:15:21,305 INFO [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 20:15:21,304 INFO [RS:0;jenkins-hbase4:40503] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 20:15:21,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 
2023-07-18 20:15:21,305 INFO [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(3305): Received CLOSE for bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:21,305 INFO [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 20:15:21,305 DEBUG [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-18 20:15:21,305 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. after waiting 0 ms 2023-07-18 20:15:21,305 DEBUG [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-18 20:15:21,305 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 2023-07-18 20:15:21,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing bacc985e3e2f993287ffc68ccedd8a41 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-18 20:15:21,306 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 20:15:21,306 INFO [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:21,309 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 20:15:21,309 DEBUG [RS:0;jenkins-hbase4:40503] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4f21aee3 to 127.0.0.1:61189 2023-07-18 20:15:21,309 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:21,309 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bd2317d5584e5026790dbd60aa6646bf, disabling compactions & flushes 2023-07-18 20:15:21,310 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:21,309 DEBUG [RS:0;jenkins-hbase4:40503] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:21,310 INFO [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 20:15:21,311 DEBUG [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1478): Online Regions={bd2317d5584e5026790dbd60aa6646bf=hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf.} 2023-07-18 20:15:21,311 DEBUG [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1504): Waiting on bd2317d5584e5026790dbd60aa6646bf 2023-07-18 20:15:21,309 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 20:15:21,311 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 20:15:21,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 2023-07-18 20:15:21,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 
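Editor's note: the close path above flushes whatever is still in the memstore (215 B for hbase:namespace here, and hbase:meta just below) before the region closes. For reference only, a test that wants the same data on disk ahead of shutdown can request the flush explicitly; this is an illustrative sketch, not part of the logged run.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class ExplicitFlushSketch {
    // Forces the same memstore-to-HFile flush that the close path performs
    // automatically during shutdown.
    static void flushBeforeShutdown(Admin admin) throws Exception {
        admin.flush(TableName.NAMESPACE_TABLE_NAME);
        admin.flush(TableName.META_TABLE_NAME);
    }
}
```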
2023-07-18 20:15:21,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. after waiting 0 ms 2023-07-18 20:15:21,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 2023-07-18 20:15:21,311 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 20:15:21,312 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-18 20:15:21,321 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/quota/bd2317d5584e5026790dbd60aa6646bf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:21,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 2023-07-18 20:15:21,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bd2317d5584e5026790dbd60aa6646bf: 2023-07-18 20:15:21,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689711319835.bd2317d5584e5026790dbd60aa6646bf. 2023-07-18 20:15:21,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41/.tmp/info/f4d2155eb3634fe98485d15d2f84204b 2023-07-18 20:15:21,335 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/.tmp/info/13572836663e499c9fb1fbbb729c9884 2023-07-18 20:15:21,344 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f4d2155eb3634fe98485d15d2f84204b 2023-07-18 20:15:21,345 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41/.tmp/info/f4d2155eb3634fe98485d15d2f84204b as hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41/info/f4d2155eb3634fe98485d15d2f84204b 2023-07-18 20:15:21,352 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f4d2155eb3634fe98485d15d2f84204b 2023-07-18 20:15:21,352 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41/info/f4d2155eb3634fe98485d15d2f84204b, entries=3, sequenceid=8, filesize=5.0 K 2023-07-18 20:15:21,353 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 13572836663e499c9fb1fbbb729c9884 2023-07-18 20:15:21,354 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for bacc985e3e2f993287ffc68ccedd8a41 in 48ms, sequenceid=8, compaction requested=false 2023-07-18 20:15:21,354 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 20:15:21,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/namespace/bacc985e3e2f993287ffc68ccedd8a41/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-18 20:15:21,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 2023-07-18 20:15:21,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bacc985e3e2f993287ffc68ccedd8a41: 2023-07-18 20:15:21,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689711319387.bacc985e3e2f993287ffc68ccedd8a41. 2023-07-18 20:15:21,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 37ddee31644840a35a075d4f94e4588a, disabling compactions & flushes 2023-07-18 20:15:21,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 2023-07-18 20:15:21,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 2023-07-18 20:15:21,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. after waiting 0 ms 2023-07-18 20:15:21,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 
2023-07-18 20:15:21,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 37ddee31644840a35a075d4f94e4588a 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-18 20:15:21,383 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/.tmp/rep_barrier/97b5c421a1624399ad21998b1922e38e 2023-07-18 20:15:21,388 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 97b5c421a1624399ad21998b1922e38e 2023-07-18 20:15:21,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a/.tmp/m/4d92ec6837e64f968d9855726d842e78 2023-07-18 20:15:21,406 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/.tmp/table/91554d0936654cc4a4581506abf7a5d5 2023-07-18 20:15:21,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a/.tmp/m/4d92ec6837e64f968d9855726d842e78 as hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a/m/4d92ec6837e64f968d9855726d842e78 2023-07-18 20:15:21,416 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 91554d0936654cc4a4581506abf7a5d5 2023-07-18 20:15:21,417 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/.tmp/info/13572836663e499c9fb1fbbb729c9884 as hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/info/13572836663e499c9fb1fbbb729c9884 2023-07-18 20:15:21,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a/m/4d92ec6837e64f968d9855726d842e78, entries=1, sequenceid=7, filesize=4.9 K 2023-07-18 20:15:21,421 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 37ddee31644840a35a075d4f94e4588a in 56ms, sequenceid=7, compaction requested=false 2023-07-18 20:15:21,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 20:15:21,426 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 
13572836663e499c9fb1fbbb729c9884 2023-07-18 20:15:21,426 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/info/13572836663e499c9fb1fbbb729c9884, entries=32, sequenceid=31, filesize=8.5 K 2023-07-18 20:15:21,427 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/.tmp/rep_barrier/97b5c421a1624399ad21998b1922e38e as hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/rep_barrier/97b5c421a1624399ad21998b1922e38e 2023-07-18 20:15:21,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/rsgroup/37ddee31644840a35a075d4f94e4588a/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-18 20:15:21,428 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 20:15:21,428 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 2023-07-18 20:15:21,428 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 37ddee31644840a35a075d4f94e4588a: 2023-07-18 20:15:21,428 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689711319462.37ddee31644840a35a075d4f94e4588a. 
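Editor's note: the hbase:meta flush above commits one store file per family it touched (info, rep_barrier, table), the same families written by the RegionStateStore puts earlier in the log. hbase:meta is readable like any other table, so its rows can be inspected with an ordinary scan; a minimal sketch, with the family name taken from the log:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMetaSketch {
    // Reads the 'info' family of hbase:meta (regioninfo/server/state qualifiers
    // as seen in the RegionStateStore puts above) and prints the row keys.
    static void dumpMeta(Connection conn) throws Exception {
        try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner =
                 meta.getScanner(new Scan().addFamily(Bytes.toBytes("info")))) {
            for (Result row : scanner) {
                System.out.println(Bytes.toStringBinary(row.getRow()));
            }
        }
    }
}
```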
2023-07-18 20:15:21,433 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 97b5c421a1624399ad21998b1922e38e 2023-07-18 20:15:21,433 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/rep_barrier/97b5c421a1624399ad21998b1922e38e, entries=1, sequenceid=31, filesize=4.9 K 2023-07-18 20:15:21,433 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/.tmp/table/91554d0936654cc4a4581506abf7a5d5 as hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/table/91554d0936654cc4a4581506abf7a5d5 2023-07-18 20:15:21,439 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 91554d0936654cc4a4581506abf7a5d5 2023-07-18 20:15:21,440 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/table/91554d0936654cc4a4581506abf7a5d5, entries=8, sequenceid=31, filesize=5.2 K 2023-07-18 20:15:21,440 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 128ms, sequenceid=31, compaction requested=false 2023-07-18 20:15:21,441 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 20:15:21,454 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-18 20:15:21,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 20:15:21,455 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 20:15:21,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 20:15:21,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 20:15:21,505 INFO [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36795,1689711318462; all regions closed. 2023-07-18 20:15:21,505 DEBUG [RS:2;jenkins-hbase4:36795] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 20:15:21,505 INFO [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44825,1689711318298; all regions closed. 2023-07-18 20:15:21,506 DEBUG [RS:1;jenkins-hbase4:44825] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 20:15:21,511 INFO [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40503,1689711318133; all regions closed. 
2023-07-18 20:15:21,511 DEBUG [RS:0;jenkins-hbase4:40503] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 20:15:21,515 DEBUG [RS:1;jenkins-hbase4:44825] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/oldWALs 2023-07-18 20:15:21,516 INFO [RS:1;jenkins-hbase4:44825] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44825%2C1689711318298.meta:.meta(num 1689711319305) 2023-07-18 20:15:21,516 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/WALs/jenkins-hbase4.apache.org,40503,1689711318133/jenkins-hbase4.apache.org%2C40503%2C1689711318133.1689711319260 not finished, retry = 0 2023-07-18 20:15:21,516 DEBUG [RS:2;jenkins-hbase4:36795] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/oldWALs 2023-07-18 20:15:21,516 INFO [RS:2;jenkins-hbase4:36795] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36795%2C1689711318462:(num 1689711319260) 2023-07-18 20:15:21,516 DEBUG [RS:2;jenkins-hbase4:36795] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:21,516 INFO [RS:2;jenkins-hbase4:36795] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:21,516 INFO [RS:2;jenkins-hbase4:36795] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:21,516 INFO [RS:2;jenkins-hbase4:36795] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 20:15:21,516 INFO [RS:2;jenkins-hbase4:36795] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 20:15:21,516 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 20:15:21,517 INFO [RS:2;jenkins-hbase4:36795] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-18 20:15:21,519 INFO [RS:2;jenkins-hbase4:36795] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36795 2023-07-18 20:15:21,523 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:21,523 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:21,523 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:21,523 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:21,523 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:21,523 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36795,1689711318462 2023-07-18 20:15:21,523 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:21,524 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36795,1689711318462] 2023-07-18 20:15:21,524 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36795,1689711318462; numProcessing=1 2023-07-18 20:15:21,524 DEBUG [RS:1;jenkins-hbase4:44825] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/oldWALs 2023-07-18 20:15:21,524 INFO [RS:1;jenkins-hbase4:44825] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44825%2C1689711318298:(num 1689711319260) 2023-07-18 20:15:21,524 DEBUG [RS:1;jenkins-hbase4:44825] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:21,524 INFO [RS:1;jenkins-hbase4:44825] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:21,525 INFO [RS:1;jenkins-hbase4:44825] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:21,525 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 20:15:21,525 INFO [RS:1;jenkins-hbase4:44825] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44825 2023-07-18 20:15:21,526 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36795,1689711318462 already deleted, retry=false 2023-07-18 20:15:21,526 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36795,1689711318462 expired; onlineServers=2 2023-07-18 20:15:21,528 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:21,528 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44825,1689711318298 2023-07-18 20:15:21,528 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:21,531 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44825,1689711318298] 2023-07-18 20:15:21,531 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44825,1689711318298; numProcessing=2 2023-07-18 20:15:21,532 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44825,1689711318298 already deleted, retry=false 2023-07-18 20:15:21,532 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44825,1689711318298 expired; onlineServers=1 2023-07-18 20:15:21,619 DEBUG [RS:0;jenkins-hbase4:40503] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/oldWALs 2023-07-18 20:15:21,619 INFO [RS:0;jenkins-hbase4:40503] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40503%2C1689711318133:(num 1689711319260) 2023-07-18 20:15:21,619 DEBUG [RS:0;jenkins-hbase4:40503] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:21,619 INFO [RS:0;jenkins-hbase4:40503] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:21,619 INFO [RS:0;jenkins-hbase4:40503] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:21,619 INFO [RS:0;jenkins-hbase4:40503] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 20:15:21,619 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 20:15:21,619 INFO [RS:0;jenkins-hbase4:40503] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 20:15:21,619 INFO [RS:0;jenkins-hbase4:40503] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-18 20:15:21,620 INFO [RS:0;jenkins-hbase4:40503] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40503 2023-07-18 20:15:21,624 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40503,1689711318133 2023-07-18 20:15:21,624 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:21,625 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40503,1689711318133] 2023-07-18 20:15:21,625 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40503,1689711318133; numProcessing=3 2023-07-18 20:15:21,627 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40503,1689711318133 already deleted, retry=false 2023-07-18 20:15:21,627 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40503,1689711318133 expired; onlineServers=0 2023-07-18 20:15:21,627 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41401,1689711317943' ***** 2023-07-18 20:15:21,627 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 20:15:21,628 DEBUG [M:0;jenkins-hbase4:41401] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4364055c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:15:21,628 INFO [M:0;jenkins-hbase4:41401] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:21,629 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 20:15:21,629 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:21,630 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:15:21,630 INFO [M:0;jenkins-hbase4:41401] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@37c3b5a4{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 20:15:21,630 INFO [M:0;jenkins-hbase4:41401] server.AbstractConnector(383): Stopped ServerConnector@1226f160{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:21,630 INFO [M:0;jenkins-hbase4:41401] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:21,631 INFO [M:0;jenkins-hbase4:41401] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4ffe8847{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:21,631 INFO [M:0;jenkins-hbase4:41401] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39bac69f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:21,631 INFO [M:0;jenkins-hbase4:41401] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41401,1689711317943 2023-07-18 20:15:21,631 INFO [M:0;jenkins-hbase4:41401] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41401,1689711317943; all regions closed. 2023-07-18 20:15:21,631 DEBUG [M:0;jenkins-hbase4:41401] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:21,631 INFO [M:0;jenkins-hbase4:41401] master.HMaster(1491): Stopping master jetty server 2023-07-18 20:15:21,632 INFO [M:0;jenkins-hbase4:41401] server.AbstractConnector(383): Stopped ServerConnector@248326c2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:21,632 DEBUG [M:0;jenkins-hbase4:41401] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 20:15:21,632 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-18 20:15:21,632 DEBUG [M:0;jenkins-hbase4:41401] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 20:15:21,632 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689711318883] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689711318883,5,FailOnTimeoutGroup] 2023-07-18 20:15:21,633 INFO [M:0;jenkins-hbase4:41401] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 20:15:21,632 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689711318877] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689711318877,5,FailOnTimeoutGroup] 2023-07-18 20:15:21,633 INFO [M:0;jenkins-hbase4:41401] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 20:15:21,634 INFO [M:0;jenkins-hbase4:41401] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:21,634 DEBUG [M:0;jenkins-hbase4:41401] master.HMaster(1512): Stopping service threads 2023-07-18 20:15:21,634 INFO [M:0;jenkins-hbase4:41401] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 20:15:21,634 ERROR [M:0;jenkins-hbase4:41401] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-18 20:15:21,634 INFO [M:0;jenkins-hbase4:41401] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 20:15:21,634 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-18 20:15:21,635 DEBUG [M:0;jenkins-hbase4:41401] zookeeper.ZKUtil(398): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 20:15:21,635 WARN [M:0;jenkins-hbase4:41401] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 20:15:21,635 INFO [M:0;jenkins-hbase4:41401] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 20:15:21,635 INFO [M:0;jenkins-hbase4:41401] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 20:15:21,635 DEBUG [M:0;jenkins-hbase4:41401] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 20:15:21,636 INFO [M:0;jenkins-hbase4:41401] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:21,636 DEBUG [M:0;jenkins-hbase4:41401] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:21,636 DEBUG [M:0;jenkins-hbase4:41401] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 20:15:21,636 DEBUG [M:0;jenkins-hbase4:41401] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:21,636 INFO [M:0;jenkins-hbase4:41401] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.00 KB heapSize=109.15 KB 2023-07-18 20:15:21,653 INFO [M:0;jenkins-hbase4:41401] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.00 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/afc0196a53b944ad89997e0c2244f07a 2023-07-18 20:15:21,659 DEBUG [M:0;jenkins-hbase4:41401] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/afc0196a53b944ad89997e0c2244f07a as hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/afc0196a53b944ad89997e0c2244f07a 2023-07-18 20:15:21,665 INFO [M:0;jenkins-hbase4:41401] regionserver.HStore(1080): Added hdfs://localhost:44781/user/jenkins/test-data/c9db745f-55b1-c360-2db9-7efa7f3e2bc4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/afc0196a53b944ad89997e0c2244f07a, entries=24, sequenceid=194, filesize=12.4 K 2023-07-18 20:15:21,665 INFO [M:0;jenkins-hbase4:41401] regionserver.HRegion(2948): Finished flush of dataSize ~93.00 KB/95237, heapSize ~109.13 KB/111752, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=194, compaction requested=false 2023-07-18 20:15:21,667 INFO [M:0;jenkins-hbase4:41401] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 20:15:21,667 DEBUG [M:0;jenkins-hbase4:41401] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 20:15:21,671 INFO [M:0;jenkins-hbase4:41401] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 20:15:21,671 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 20:15:21,672 INFO [M:0;jenkins-hbase4:41401] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41401 2023-07-18 20:15:21,673 DEBUG [M:0;jenkins-hbase4:41401] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41401,1689711317943 already deleted, retry=false 2023-07-18 20:15:21,879 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:21,879 INFO [M:0;jenkins-hbase4:41401] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41401,1689711317943; zookeeper connection closed. 2023-07-18 20:15:21,879 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): master:41401-0x1017a1316a00000, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:21,980 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:21,980 INFO [RS:0;jenkins-hbase4:40503] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40503,1689711318133; zookeeper connection closed. 2023-07-18 20:15:21,980 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:40503-0x1017a1316a00001, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:21,981 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1213f8b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1213f8b 2023-07-18 20:15:22,080 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:22,080 INFO [RS:1;jenkins-hbase4:44825] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44825,1689711318298; zookeeper connection closed. 2023-07-18 20:15:22,080 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:44825-0x1017a1316a00002, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:22,080 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7c8986b7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7c8986b7 2023-07-18 20:15:22,180 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:22,180 INFO [RS:2;jenkins-hbase4:36795] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36795,1689711318462; zookeeper connection closed. 
2023-07-18 20:15:22,180 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): regionserver:36795-0x1017a1316a00003, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:22,181 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@fc69d1f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@fc69d1f 2023-07-18 20:15:22,181 INFO [Listener at localhost/37791] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-18 20:15:22,181 WARN [Listener at localhost/37791] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 20:15:22,185 INFO [Listener at localhost/37791] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:22,290 WARN [BP-1409353883-172.31.14.131-1689711316996 heartbeating to localhost/127.0.0.1:44781] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 20:15:22,290 WARN [BP-1409353883-172.31.14.131-1689711316996 heartbeating to localhost/127.0.0.1:44781] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1409353883-172.31.14.131-1689711316996 (Datanode Uuid 211f74fc-3c9b-4057-bda6-d849ca9b77d7) service to localhost/127.0.0.1:44781 2023-07-18 20:15:22,291 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/cluster_e56e151a-6e00-f4c6-3781-f4475d14b930/dfs/data/data5/current/BP-1409353883-172.31.14.131-1689711316996] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:22,291 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/cluster_e56e151a-6e00-f4c6-3781-f4475d14b930/dfs/data/data6/current/BP-1409353883-172.31.14.131-1689711316996] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:22,293 WARN [Listener at localhost/37791] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 20:15:22,296 INFO [Listener at localhost/37791] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:22,403 WARN [BP-1409353883-172.31.14.131-1689711316996 heartbeating to localhost/127.0.0.1:44781] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 20:15:22,403 WARN [BP-1409353883-172.31.14.131-1689711316996 heartbeating to localhost/127.0.0.1:44781] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1409353883-172.31.14.131-1689711316996 (Datanode Uuid aa71a145-df57-4013-b039-6c2f2f91adc8) service to localhost/127.0.0.1:44781 2023-07-18 20:15:22,403 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/cluster_e56e151a-6e00-f4c6-3781-f4475d14b930/dfs/data/data3/current/BP-1409353883-172.31.14.131-1689711316996] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:22,404 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/cluster_e56e151a-6e00-f4c6-3781-f4475d14b930/dfs/data/data4/current/BP-1409353883-172.31.14.131-1689711316996] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:22,405 WARN [Listener at localhost/37791] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 20:15:22,407 INFO [Listener at localhost/37791] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:22,509 WARN [BP-1409353883-172.31.14.131-1689711316996 heartbeating to localhost/127.0.0.1:44781] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 20:15:22,509 WARN [BP-1409353883-172.31.14.131-1689711316996 heartbeating to localhost/127.0.0.1:44781] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1409353883-172.31.14.131-1689711316996 (Datanode Uuid d222239b-3c58-466c-874a-0802d7d6daf2) service to localhost/127.0.0.1:44781 2023-07-18 20:15:22,510 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/cluster_e56e151a-6e00-f4c6-3781-f4475d14b930/dfs/data/data1/current/BP-1409353883-172.31.14.131-1689711316996] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:22,510 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/cluster_e56e151a-6e00-f4c6-3781-f4475d14b930/dfs/data/data2/current/BP-1409353883-172.31.14.131-1689711316996] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:22,519 INFO [Listener at localhost/37791] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:22,635 INFO [Listener at localhost/37791] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 20:15:22,662 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 20:15:22,662 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 20:15:22,662 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.log.dir so I do NOT create it in target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5 2023-07-18 20:15:22,662 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/55b61599-f132-0d4a-e1d3-ed5a75848f57/hadoop.tmp.dir so I do NOT create it in target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5 2023-07-18 20:15:22,662 INFO [Listener at localhost/37791] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff, deleteOnExit=true 2023-07-18 20:15:22,662 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 20:15:22,662 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/test.cache.data in system properties and HBase conf 2023-07-18 20:15:22,662 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 20:15:22,662 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.log.dir in system properties and HBase conf 2023-07-18 20:15:22,663 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 20:15:22,663 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 20:15:22,663 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 20:15:22,663 DEBUG [Listener at localhost/37791] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 20:15:22,663 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 20:15:22,663 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 20:15:22,663 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 20:15:22,663 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 20:15:22,663 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 20:15:22,664 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 20:15:22,664 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 20:15:22,664 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 20:15:22,664 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 20:15:22,664 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/nfs.dump.dir in system properties and HBase conf 2023-07-18 20:15:22,664 INFO [Listener at localhost/37791] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/java.io.tmpdir in system properties and HBase conf 2023-07-18 20:15:22,664 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 20:15:22,664 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 20:15:22,664 INFO [Listener at localhost/37791] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 20:15:22,669 WARN [Listener at localhost/37791] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 20:15:22,669 WARN [Listener at localhost/37791] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 20:15:22,714 WARN [Listener at localhost/37791] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:15:22,716 INFO [Listener at localhost/37791] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:15:22,720 INFO [Listener at localhost/37791] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/java.io.tmpdir/Jetty_localhost_34011_hdfs____osxuyh/webapp 2023-07-18 20:15:22,733 DEBUG [Listener at localhost/37791-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1017a1316a0000a, quorum=127.0.0.1:61189, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 20:15:22,734 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1017a1316a0000a, quorum=127.0.0.1:61189, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 20:15:22,812 INFO [Listener at localhost/37791] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34011 2023-07-18 20:15:22,816 WARN [Listener at localhost/37791] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 20:15:22,816 WARN [Listener at localhost/37791] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 20:15:22,855 WARN [Listener at localhost/40885] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 20:15:22,866 WARN [Listener at localhost/40885] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 20:15:22,868 WARN [Listener 
at localhost/40885] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:15:22,869 INFO [Listener at localhost/40885] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:15:22,875 INFO [Listener at localhost/40885] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/java.io.tmpdir/Jetty_localhost_34237_datanode____itwsxo/webapp 2023-07-18 20:15:22,967 INFO [Listener at localhost/40885] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34237 2023-07-18 20:15:22,975 WARN [Listener at localhost/34161] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 20:15:22,991 WARN [Listener at localhost/34161] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 20:15:22,993 WARN [Listener at localhost/34161] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:15:22,994 INFO [Listener at localhost/34161] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:15:22,999 INFO [Listener at localhost/34161] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/java.io.tmpdir/Jetty_localhost_36323_datanode____.gdtrmw/webapp 2023-07-18 20:15:23,087 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x52c851d65a48e8d9: Processing first storage report for DS-d0a20b63-2ea1-49cc-bff7-01782339aff3 from datanode 42e721a5-9bc2-410c-b8a1-978b25a19f92 2023-07-18 20:15:23,087 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x52c851d65a48e8d9: from storage DS-d0a20b63-2ea1-49cc-bff7-01782339aff3 node DatanodeRegistration(127.0.0.1:41671, datanodeUuid=42e721a5-9bc2-410c-b8a1-978b25a19f92, infoPort=36843, infoSecurePort=0, ipcPort=34161, storageInfo=lv=-57;cid=testClusterID;nsid=1961620028;c=1689711322671), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:15:23,087 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x52c851d65a48e8d9: Processing first storage report for DS-375aaa2f-66f8-4798-9176-7eae9fc5b8f6 from datanode 42e721a5-9bc2-410c-b8a1-978b25a19f92 2023-07-18 20:15:23,087 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x52c851d65a48e8d9: from storage DS-375aaa2f-66f8-4798-9176-7eae9fc5b8f6 node DatanodeRegistration(127.0.0.1:41671, datanodeUuid=42e721a5-9bc2-410c-b8a1-978b25a19f92, infoPort=36843, infoSecurePort=0, ipcPort=34161, storageInfo=lv=-57;cid=testClusterID;nsid=1961620028;c=1689711322671), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:15:23,103 INFO [Listener at localhost/34161] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36323 2023-07-18 20:15:23,114 WARN [Listener at localhost/41877] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-18 20:15:23,134 WARN [Listener at localhost/41877] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 20:15:23,136 WARN [Listener at localhost/41877] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 20:15:23,137 INFO [Listener at localhost/41877] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 20:15:23,141 INFO [Listener at localhost/41877] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/java.io.tmpdir/Jetty_localhost_41709_datanode____.6if06u/webapp 2023-07-18 20:15:23,217 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa73a9c69b0f853d1: Processing first storage report for DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8 from datanode 9906c981-572b-4687-b3ee-323f61361fcc 2023-07-18 20:15:23,217 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa73a9c69b0f853d1: from storage DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8 node DatanodeRegistration(127.0.0.1:39419, datanodeUuid=9906c981-572b-4687-b3ee-323f61361fcc, infoPort=42765, infoSecurePort=0, ipcPort=41877, storageInfo=lv=-57;cid=testClusterID;nsid=1961620028;c=1689711322671), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:15:23,217 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa73a9c69b0f853d1: Processing first storage report for DS-80fca947-a275-4252-888b-04e6bff79d1b from datanode 9906c981-572b-4687-b3ee-323f61361fcc 2023-07-18 20:15:23,218 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa73a9c69b0f853d1: from storage DS-80fca947-a275-4252-888b-04e6bff79d1b node DatanodeRegistration(127.0.0.1:39419, datanodeUuid=9906c981-572b-4687-b3ee-323f61361fcc, infoPort=42765, infoSecurePort=0, ipcPort=41877, storageInfo=lv=-57;cid=testClusterID;nsid=1961620028;c=1689711322671), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:15:23,243 INFO [Listener at localhost/41877] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41709 2023-07-18 20:15:23,250 WARN [Listener at localhost/43545] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 20:15:23,342 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x567c1d684129057c: Processing first storage report for DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58 from datanode 5d207432-a59d-4781-a846-b4ceb46f6f3b 2023-07-18 20:15:23,342 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x567c1d684129057c: from storage DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58 node DatanodeRegistration(127.0.0.1:40411, datanodeUuid=5d207432-a59d-4781-a846-b4ceb46f6f3b, infoPort=33897, infoSecurePort=0, ipcPort=43545, storageInfo=lv=-57;cid=testClusterID;nsid=1961620028;c=1689711322671), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:15:23,342 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x567c1d684129057c: Processing first storage 
report for DS-675e70ef-23f4-43cc-beeb-204b4029302a from datanode 5d207432-a59d-4781-a846-b4ceb46f6f3b 2023-07-18 20:15:23,342 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x567c1d684129057c: from storage DS-675e70ef-23f4-43cc-beeb-204b4029302a node DatanodeRegistration(127.0.0.1:40411, datanodeUuid=5d207432-a59d-4781-a846-b4ceb46f6f3b, infoPort=33897, infoSecurePort=0, ipcPort=43545, storageInfo=lv=-57;cid=testClusterID;nsid=1961620028;c=1689711322671), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 20:15:23,359 DEBUG [Listener at localhost/43545] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5 2023-07-18 20:15:23,363 INFO [Listener at localhost/43545] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/zookeeper_0, clientPort=57108, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 20:15:23,364 INFO [Listener at localhost/43545] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57108 2023-07-18 20:15:23,364 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:23,365 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:23,379 INFO [Listener at localhost/43545] util.FSUtils(471): Created version file at hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499 with version=8 2023-07-18 20:15:23,380 INFO [Listener at localhost/43545] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37087/user/jenkins/test-data/295cb58d-54fa-6a78-e54d-4197d104cf67/hbase-staging 2023-07-18 20:15:23,380 DEBUG [Listener at localhost/43545] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 20:15:23,380 DEBUG [Listener at localhost/43545] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 20:15:23,381 DEBUG [Listener at localhost/43545] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 20:15:23,381 DEBUG [Listener at localhost/43545] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-18 20:15:23,381 INFO [Listener at localhost/43545] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:15:23,381 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,382 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,382 INFO [Listener at localhost/43545] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:15:23,382 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,382 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:15:23,382 INFO [Listener at localhost/43545] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:15:23,384 INFO [Listener at localhost/43545] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41751 2023-07-18 20:15:23,384 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:23,385 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:23,386 INFO [Listener at localhost/43545] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41751 connecting to ZooKeeper ensemble=127.0.0.1:57108 2023-07-18 20:15:23,394 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:417510x0, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:15:23,395 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41751-0x1017a132be30000 connected 2023-07-18 20:15:23,412 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:15:23,412 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:23,413 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:15:23,415 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41751 2023-07-18 20:15:23,415 DEBUG [Listener at localhost/43545] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41751 2023-07-18 20:15:23,415 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41751 2023-07-18 20:15:23,415 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41751 2023-07-18 20:15:23,416 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41751 2023-07-18 20:15:23,417 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:15:23,417 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:15:23,417 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:15:23,418 INFO [Listener at localhost/43545] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 20:15:23,418 INFO [Listener at localhost/43545] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:15:23,418 INFO [Listener at localhost/43545] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:15:23,419 INFO [Listener at localhost/43545] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 20:15:23,419 INFO [Listener at localhost/43545] http.HttpServer(1146): Jetty bound to port 34953 2023-07-18 20:15:23,419 INFO [Listener at localhost/43545] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:15:23,423 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:23,423 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@27ece686{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:15:23,423 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:23,424 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1f0d357d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:15:23,537 INFO [Listener at localhost/43545] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:15:23,538 INFO [Listener at localhost/43545] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:15:23,538 INFO [Listener at localhost/43545] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:15:23,538 INFO [Listener at localhost/43545] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 20:15:23,539 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:23,540 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1a4e5625{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/java.io.tmpdir/jetty-0_0_0_0-34953-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3723792847968814033/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 20:15:23,541 INFO [Listener at localhost/43545] server.AbstractConnector(333): Started ServerConnector@2fb99c35{HTTP/1.1, (http/1.1)}{0.0.0.0:34953} 2023-07-18 20:15:23,542 INFO [Listener at localhost/43545] server.Server(415): Started @42831ms 2023-07-18 20:15:23,542 INFO [Listener at localhost/43545] master.HMaster(444): hbase.rootdir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499, hbase.cluster.distributed=false 2023-07-18 20:15:23,555 INFO [Listener at localhost/43545] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:15:23,556 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,556 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,556 
INFO [Listener at localhost/43545] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:15:23,556 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,556 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:15:23,556 INFO [Listener at localhost/43545] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:15:23,557 INFO [Listener at localhost/43545] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43051 2023-07-18 20:15:23,557 INFO [Listener at localhost/43545] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 20:15:23,558 DEBUG [Listener at localhost/43545] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 20:15:23,558 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:23,559 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:23,560 INFO [Listener at localhost/43545] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43051 connecting to ZooKeeper ensemble=127.0.0.1:57108 2023-07-18 20:15:23,565 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:430510x0, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:15:23,567 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43051-0x1017a132be30001 connected 2023-07-18 20:15:23,567 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:15:23,567 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:23,568 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:15:23,568 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43051 2023-07-18 20:15:23,569 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43051 2023-07-18 20:15:23,569 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43051 2023-07-18 20:15:23,574 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43051 2023-07-18 20:15:23,574 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43051 2023-07-18 20:15:23,576 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:15:23,576 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:15:23,576 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:15:23,576 INFO [Listener at localhost/43545] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 20:15:23,576 INFO [Listener at localhost/43545] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:15:23,576 INFO [Listener at localhost/43545] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:15:23,577 INFO [Listener at localhost/43545] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 20:15:23,577 INFO [Listener at localhost/43545] http.HttpServer(1146): Jetty bound to port 44461 2023-07-18 20:15:23,577 INFO [Listener at localhost/43545] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:15:23,578 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:23,579 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5d3b5daa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:15:23,579 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:23,579 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2034b22a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:15:23,700 INFO [Listener at localhost/43545] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:15:23,701 INFO [Listener at localhost/43545] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:15:23,701 INFO [Listener at localhost/43545] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:15:23,702 INFO [Listener at localhost/43545] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 20:15:23,703 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:23,704 INFO 
[Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3b5c9b44{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/java.io.tmpdir/jetty-0_0_0_0-44461-hbase-server-2_4_18-SNAPSHOT_jar-_-any-778294717589994871/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:23,705 INFO [Listener at localhost/43545] server.AbstractConnector(333): Started ServerConnector@e1f8b2a{HTTP/1.1, (http/1.1)}{0.0.0.0:44461} 2023-07-18 20:15:23,705 INFO [Listener at localhost/43545] server.Server(415): Started @42995ms 2023-07-18 20:15:23,717 INFO [Listener at localhost/43545] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:15:23,717 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,717 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,717 INFO [Listener at localhost/43545] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:15:23,717 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,717 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:15:23,718 INFO [Listener at localhost/43545] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:15:23,718 INFO [Listener at localhost/43545] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33727 2023-07-18 20:15:23,719 INFO [Listener at localhost/43545] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 20:15:23,720 DEBUG [Listener at localhost/43545] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 20:15:23,720 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:23,721 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:23,722 INFO [Listener at localhost/43545] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33727 connecting to ZooKeeper ensemble=127.0.0.1:57108 2023-07-18 20:15:23,725 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:337270x0, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 
20:15:23,727 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33727-0x1017a132be30002 connected 2023-07-18 20:15:23,727 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:15:23,728 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:23,728 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:15:23,731 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33727 2023-07-18 20:15:23,731 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33727 2023-07-18 20:15:23,733 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33727 2023-07-18 20:15:23,734 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33727 2023-07-18 20:15:23,734 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33727 2023-07-18 20:15:23,736 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:15:23,736 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:15:23,736 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:15:23,736 INFO [Listener at localhost/43545] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 20:15:23,736 INFO [Listener at localhost/43545] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:15:23,736 INFO [Listener at localhost/43545] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:15:23,737 INFO [Listener at localhost/43545] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 20:15:23,737 INFO [Listener at localhost/43545] http.HttpServer(1146): Jetty bound to port 40615 2023-07-18 20:15:23,737 INFO [Listener at localhost/43545] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:15:23,739 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:23,739 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@72be832f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:15:23,739 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:23,740 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@35af6de7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:15:23,861 INFO [Listener at localhost/43545] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:15:23,862 INFO [Listener at localhost/43545] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:15:23,862 INFO [Listener at localhost/43545] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:15:23,862 INFO [Listener at localhost/43545] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 20:15:23,863 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:23,864 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1ed60418{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/java.io.tmpdir/jetty-0_0_0_0-40615-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7589499557345934260/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:23,867 INFO [Listener at localhost/43545] server.AbstractConnector(333): Started ServerConnector@4bbe91f1{HTTP/1.1, (http/1.1)}{0.0.0.0:40615} 2023-07-18 20:15:23,867 INFO [Listener at localhost/43545] server.Server(415): Started @43157ms 2023-07-18 20:15:23,882 INFO [Listener at localhost/43545] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:15:23,883 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,883 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,883 INFO [Listener at localhost/43545] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:15:23,883 INFO 
[Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:23,883 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:15:23,883 INFO [Listener at localhost/43545] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:15:23,884 INFO [Listener at localhost/43545] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38655 2023-07-18 20:15:23,885 INFO [Listener at localhost/43545] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 20:15:23,896 DEBUG [Listener at localhost/43545] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 20:15:23,897 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:23,898 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:23,899 INFO [Listener at localhost/43545] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38655 connecting to ZooKeeper ensemble=127.0.0.1:57108 2023-07-18 20:15:23,910 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:386550x0, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:15:23,911 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): regionserver:386550x0, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:15:23,911 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): regionserver:386550x0, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:23,912 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): regionserver:386550x0, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:15:23,915 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38655-0x1017a132be30003 connected 2023-07-18 20:15:23,917 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38655 2023-07-18 20:15:23,917 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38655 2023-07-18 20:15:23,922 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38655 2023-07-18 20:15:23,934 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38655 2023-07-18 20:15:23,935 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38655 2023-07-18 
20:15:23,937 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:15:23,937 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:15:23,938 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:15:23,938 INFO [Listener at localhost/43545] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 20:15:23,938 INFO [Listener at localhost/43545] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:15:23,938 INFO [Listener at localhost/43545] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:15:23,939 INFO [Listener at localhost/43545] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 20:15:23,939 INFO [Listener at localhost/43545] http.HttpServer(1146): Jetty bound to port 45065 2023-07-18 20:15:23,939 INFO [Listener at localhost/43545] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:15:23,993 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:23,993 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41854f52{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:15:23,994 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:23,994 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3ed2d6d6{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:15:24,111 INFO [Listener at localhost/43545] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:15:24,111 INFO [Listener at localhost/43545] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:15:24,112 INFO [Listener at localhost/43545] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:15:24,112 INFO [Listener at localhost/43545] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 20:15:24,113 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:24,113 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@4981e3b1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/java.io.tmpdir/jetty-0_0_0_0-45065-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6724000264931864613/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:24,115 INFO [Listener at localhost/43545] server.AbstractConnector(333): Started ServerConnector@795d3fe0{HTTP/1.1, (http/1.1)}{0.0.0.0:45065} 2023-07-18 20:15:24,115 INFO [Listener at localhost/43545] server.Server(415): Started @43405ms 2023-07-18 20:15:24,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:15:24,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@26b927f9{HTTP/1.1, (http/1.1)}{0.0.0.0:46619} 2023-07-18 20:15:24,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @43410ms 2023-07-18 20:15:24,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41751,1689711323381 2023-07-18 20:15:24,122 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 20:15:24,123 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41751,1689711323381 2023-07-18 20:15:24,124 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:15:24,124 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:15:24,124 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:15:24,124 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 20:15:24,125 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:24,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 20:15:24,128 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41751,1689711323381 from backup master directory 2023-07-18 20:15:24,128 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 20:15:24,129 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41751,1689711323381 2023-07-18 20:15:24,129 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 20:15:24,129 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 20:15:24,129 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41751,1689711323381 2023-07-18 20:15:24,143 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/hbase.id with ID: 96cf92bc-5850-4e9f-9d03-7a4e4812ba40 2023-07-18 20:15:24,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:24,155 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:24,167 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5f5b1a7a to 127.0.0.1:57108 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:24,172 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@127c9385, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:15:24,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:24,173 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 20:15:24,173 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:24,175 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/data/master/store-tmp 2023-07-18 20:15:24,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:24,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 20:15:24,196 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:24,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:24,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 20:15:24,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:24,196 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 20:15:24,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 20:15:24,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/WALs/jenkins-hbase4.apache.org,41751,1689711323381 2023-07-18 20:15:24,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41751%2C1689711323381, suffix=, logDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/WALs/jenkins-hbase4.apache.org,41751,1689711323381, archiveDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/oldWALs, maxLogs=10 2023-07-18 20:15:24,214 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK] 2023-07-18 20:15:24,214 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK] 2023-07-18 20:15:24,215 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK] 2023-07-18 20:15:24,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/WALs/jenkins-hbase4.apache.org,41751,1689711323381/jenkins-hbase4.apache.org%2C41751%2C1689711323381.1689711324199 2023-07-18 20:15:24,218 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK], DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK], DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK]] 2023-07-18 20:15:24,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:24,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:24,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:24,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:24,221 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:24,222 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 20:15:24,222 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 20:15:24,223 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:24,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:24,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:24,227 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 20:15:24,231 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:24,232 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11165771520, jitterRate=0.039893507957458496}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:24,232 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 20:15:24,232 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 20:15:24,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 20:15:24,234 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 20:15:24,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 20:15:24,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 20:15:24,235 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 20:15:24,235 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 20:15:24,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 20:15:24,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 20:15:24,241 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 20:15:24,241 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 20:15:24,241 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 20:15:24,244 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:24,244 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 20:15:24,245 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 20:15:24,246 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 20:15:24,247 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:24,247 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:24,247 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-18 20:15:24,247 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:24,247 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:24,248 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41751,1689711323381, sessionid=0x1017a132be30000, setting cluster-up flag (Was=false) 2023-07-18 20:15:24,254 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:24,258 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 20:15:24,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41751,1689711323381 2023-07-18 20:15:24,262 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:24,267 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 20:15:24,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41751,1689711323381 2023-07-18 20:15:24,268 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.hbase-snapshot/.tmp 2023-07-18 20:15:24,269 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 20:15:24,269 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 20:15:24,270 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 20:15:24,271 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 20:15:24,271 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-18 20:15:24,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 20:15:24,284 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 20:15:24,284 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 20:15:24,285 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 20:15:24,285 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 20:15:24,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:15:24,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:15:24,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:15:24,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 20:15:24,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 20:15:24,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:15:24,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,287 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689711354287 2023-07-18 20:15:24,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 20:15:24,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 20:15:24,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 20:15:24,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 20:15:24,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 20:15:24,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 20:15:24,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,289 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 20:15:24,289 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 20:15:24,289 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 20:15:24,289 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 20:15:24,290 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 20:15:24,290 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 20:15:24,290 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 20:15:24,290 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689711324290,5,FailOnTimeoutGroup] 2023-07-18 20:15:24,291 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
{NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:24,295 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689711324290,5,FailOnTimeoutGroup] 2023-07-18 20:15:24,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 20:15:24,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,313 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:24,314 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:24,314 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499 2023-07-18 20:15:24,316 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(951): ClusterId : 96cf92bc-5850-4e9f-9d03-7a4e4812ba40 2023-07-18 20:15:24,320 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 20:15:24,322 INFO [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(951): ClusterId : 96cf92bc-5850-4e9f-9d03-7a4e4812ba40 2023-07-18 20:15:24,322 INFO [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(951): 
ClusterId : 96cf92bc-5850-4e9f-9d03-7a4e4812ba40 2023-07-18 20:15:24,322 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 20:15:24,325 DEBUG [RS:1;jenkins-hbase4:33727] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 20:15:24,325 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 20:15:24,325 DEBUG [RS:2;jenkins-hbase4:38655] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 20:15:24,327 DEBUG [RS:2;jenkins-hbase4:38655] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 20:15:24,327 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 20:15:24,327 DEBUG [RS:2;jenkins-hbase4:38655] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 20:15:24,327 DEBUG [RS:1;jenkins-hbase4:33727] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 20:15:24,328 DEBUG [RS:1;jenkins-hbase4:33727] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 20:15:24,329 DEBUG [RS:0;jenkins-hbase4:43051] zookeeper.ReadOnlyZKClient(139): Connect 0x5648724d to 127.0.0.1:57108 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:24,330 DEBUG [RS:2;jenkins-hbase4:38655] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 20:15:24,334 DEBUG [RS:2;jenkins-hbase4:38655] zookeeper.ReadOnlyZKClient(139): Connect 0x4c291667 to 127.0.0.1:57108 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:24,335 DEBUG [RS:1;jenkins-hbase4:33727] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 20:15:24,342 DEBUG [RS:1;jenkins-hbase4:33727] zookeeper.ReadOnlyZKClient(139): Connect 0x561f31ec to 127.0.0.1:57108 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:24,352 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:24,353 DEBUG [RS:0;jenkins-hbase4:43051] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@588f86f4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:15:24,353 DEBUG [RS:0;jenkins-hbase4:43051] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5cb0193d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:15:24,359 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 
20:15:24,360 DEBUG [RS:2;jenkins-hbase4:38655] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50741cf6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:15:24,360 DEBUG [RS:2;jenkins-hbase4:38655] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d2f1c58, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:15:24,361 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/info 2023-07-18 20:15:24,362 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 20:15:24,362 DEBUG [RS:1;jenkins-hbase4:33727] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@58d8d55f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:15:24,362 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:24,363 DEBUG [RS:1;jenkins-hbase4:33727] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ce75a9c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:15:24,363 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 20:15:24,364 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/rep_barrier 2023-07-18 20:15:24,365 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; 
tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 20:15:24,365 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:24,365 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 20:15:24,366 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:43051 2023-07-18 20:15:24,366 INFO [RS:0;jenkins-hbase4:43051] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 20:15:24,366 INFO [RS:0;jenkins-hbase4:43051] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 20:15:24,366 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 20:15:24,367 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41751,1689711323381 with isa=jenkins-hbase4.apache.org/172.31.14.131:43051, startcode=1689711323555 2023-07-18 20:15:24,367 DEBUG [RS:0;jenkins-hbase4:43051] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 20:15:24,367 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/table 2023-07-18 20:15:24,367 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 20:15:24,368 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:24,368 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35659, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 20:15:24,369 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740 2023-07-18 20:15:24,370 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41751] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:24,376 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 20:15:24,377 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 20:15:24,377 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499 2023-07-18 20:15:24,377 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40885 2023-07-18 20:15:24,378 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740 2023-07-18 20:15:24,378 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34953 2023-07-18 20:15:24,381 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:24,381 DEBUG [RS:0;jenkins-hbase4:43051] zookeeper.ZKUtil(162): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:24,381 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 20:15:24,381 WARN [RS:0;jenkins-hbase4:43051] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 20:15:24,382 INFO [RS:0;jenkins-hbase4:43051] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:24,382 DEBUG [RS:1;jenkins-hbase4:33727] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:33727 2023-07-18 20:15:24,382 INFO [RS:1;jenkins-hbase4:33727] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 20:15:24,382 INFO [RS:1;jenkins-hbase4:33727] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 20:15:24,382 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:24,382 DEBUG [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1022): About to register with Master. 
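[Editor's note] The entries around 20:15:24,291 and 20:15:24,314 above show InitMetaProcedure writing the hbase:meta table descriptor: three column families (info, rep_barrier, table), all in-memory with bloom filters disabled, plus the MultiRowMutationEndpoint coprocessor. As a hedged illustration only (not how InitMetaProcedure itself builds the descriptor), the sketch below uses the public HBase 2.x client API to assemble a descriptor with the same family settings; the table name "demo:meta_like" is a hypothetical placeholder.

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class MetaLikeDescriptorSketch {
  // Mirrors the families printed in the log: info/table keep 3 versions with 8 KB blocks,
  // rep_barrier keeps all versions with 64 KB blocks; all are in-memory, no bloom filter.
  public static TableDescriptor build() throws IOException {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo:meta_like")) // hypothetical name
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE).setInMemory(true)
            .setMaxVersions(3).setBlocksize(8192).build())
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("rep_barrier"))
            .setBloomFilterType(BloomType.NONE).setInMemory(true)
            .setMaxVersions(Integer.MAX_VALUE).setBlocksize(65536).build())
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("table"))
            .setBloomFilterType(BloomType.NONE).setInMemory(true)
            .setMaxVersions(3).setBlocksize(8192).build())
        .build();
  }

  private MetaLikeDescriptorSketch() {
  }
}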
2023-07-18 20:15:24,382 INFO [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41751,1689711323381 with isa=jenkins-hbase4.apache.org/172.31.14.131:33727, startcode=1689711323717 2023-07-18 20:15:24,382 DEBUG [RS:1;jenkins-hbase4:33727] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 20:15:24,383 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 20:15:24,384 DEBUG [RS:2;jenkins-hbase4:38655] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:38655 2023-07-18 20:15:24,384 INFO [RS:2;jenkins-hbase4:38655] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 20:15:24,384 INFO [RS:2;jenkins-hbase4:38655] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 20:15:24,384 DEBUG [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 20:15:24,385 INFO [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41751,1689711323381 with isa=jenkins-hbase4.apache.org/172.31.14.131:38655, startcode=1689711323882 2023-07-18 20:15:24,385 DEBUG [RS:2;jenkins-hbase4:38655] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 20:15:24,394 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41323, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 20:15:24,394 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:24,398 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41751] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:24,398 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 20:15:24,398 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60011, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 20:15:24,399 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41751] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:24,399 DEBUG [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499 2023-07-18 20:15:24,395 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43051,1689711323555] 2023-07-18 20:15:24,399 DEBUG [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40885 2023-07-18 20:15:24,403 DEBUG [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34953 2023-07-18 20:15:24,399 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11854552320, jitterRate=0.1040412187576294}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 20:15:24,398 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 20:15:24,403 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 20:15:24,403 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 20:15:24,404 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 20:15:24,403 DEBUG [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499 2023-07-18 20:15:24,404 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 20:15:24,404 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 20:15:24,404 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 20:15:24,404 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 20:15:24,404 DEBUG [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40885 2023-07-18 20:15:24,404 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 20:15:24,404 DEBUG [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34953 2023-07-18 20:15:24,404 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 20:15:24,404 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 20:15:24,408 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 20:15:24,408 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 20:15:24,408 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 20:15:24,410 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:24,412 DEBUG [RS:2;jenkins-hbase4:38655] zookeeper.ZKUtil(162): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:24,413 WARN [RS:2;jenkins-hbase4:38655] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
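[Editor's note] Between 20:15:24,367 and 20:15:24,404 the three region servers report for duty, the master registers them, and the RSGroup ServerEventsListenerThread updates the default group from 1 to 3 servers. A minimal sketch of how a test could wait for all three registrations follows, assuming an already-started HBaseTestingUtility held in a variable named util (the variable and the 60-second timeout are assumptions, not taken from this run).

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.client.Admin;

public final class WaitForRegionServersSketch {
  // Blocks until the master reports three live region servers, or 60 seconds elapse.
  public static void awaitThreeRegionServers(HBaseTestingUtility util) throws Exception {
    try (Admin admin = util.getConnection().getAdmin()) {
      Waiter.Predicate<Exception> threeLive =
          () -> admin.getClusterMetrics().getLiveServerMetrics().size() == 3;
      util.waitFor(60_000, threeLive);
    }
  }

  private WaitForRegionServersSketch() {
  }
}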
2023-07-18 20:15:24,412 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 20:15:24,413 DEBUG [RS:1;jenkins-hbase4:33727] zookeeper.ZKUtil(162): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:24,413 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33727,1689711323717] 2023-07-18 20:15:24,413 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38655,1689711323882] 2023-07-18 20:15:24,413 WARN [RS:1;jenkins-hbase4:33727] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 20:15:24,413 INFO [RS:1;jenkins-hbase4:33727] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:24,413 INFO [RS:2;jenkins-hbase4:38655] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:24,414 DEBUG [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:24,414 DEBUG [RS:0;jenkins-hbase4:43051] zookeeper.ZKUtil(162): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:24,414 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 20:15:24,414 DEBUG [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:24,415 DEBUG [RS:0;jenkins-hbase4:43051] zookeeper.ZKUtil(162): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:24,415 DEBUG [RS:0;jenkins-hbase4:43051] zookeeper.ZKUtil(162): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:24,416 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 20:15:24,416 INFO [RS:0;jenkins-hbase4:43051] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 20:15:24,421 INFO [RS:0;jenkins-hbase4:43051] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 20:15:24,427 INFO [RS:0;jenkins-hbase4:43051] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, 
lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 20:15:24,428 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,428 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 20:15:24,430 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,430 DEBUG [RS:1;jenkins-hbase4:33727] zookeeper.ZKUtil(162): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:24,431 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,431 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,431 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,431 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,431 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,431 DEBUG [RS:1;jenkins-hbase4:33727] zookeeper.ZKUtil(162): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:24,431 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:15:24,431 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,431 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,431 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,432 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,432 DEBUG [RS:1;jenkins-hbase4:33727] zookeeper.ZKUtil(162): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:24,432 DEBUG [RS:2;jenkins-hbase4:38655] zookeeper.ZKUtil(162): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33727,1689711323717 
2023-07-18 20:15:24,432 DEBUG [RS:2;jenkins-hbase4:38655] zookeeper.ZKUtil(162): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:24,433 DEBUG [RS:2;jenkins-hbase4:38655] zookeeper.ZKUtil(162): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:24,433 DEBUG [RS:1;jenkins-hbase4:33727] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 20:15:24,433 INFO [RS:1;jenkins-hbase4:33727] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 20:15:24,433 DEBUG [RS:2;jenkins-hbase4:38655] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 20:15:24,434 INFO [RS:2;jenkins-hbase4:38655] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 20:15:24,439 INFO [RS:1;jenkins-hbase4:33727] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 20:15:24,442 INFO [RS:2;jenkins-hbase4:38655] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 20:15:24,444 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,444 INFO [RS:1;jenkins-hbase4:33727] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 20:15:24,444 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,444 INFO [RS:2;jenkins-hbase4:38655] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 20:15:24,444 INFO [RS:1;jenkins-hbase4:33727] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,444 INFO [RS:2;jenkins-hbase4:38655] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,444 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,444 INFO [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 20:15:24,447 INFO [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 20:15:24,449 INFO [RS:1;jenkins-hbase4:33727] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
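[Editor's note] The PressureAwareCompactionThroughputController lines above report a 50–100 MB/s compaction throughput band with a 60,000 ms tuning period, which are the defaults. A hedged configuration sketch follows; the property names are assumptions recalled from PressureAwareCompactionThroughputController and should be verified against the HBase version in use.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class CompactionThroughputConfigSketch {
  public static Configuration compactionThroughputBounds() {
    Configuration conf = HBaseConfiguration.create();
    // Property names are assumptions; confirm them before relying on this sketch.
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    return conf;
  }

  private CompactionThroughputConfigSketch() {
  }
}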
2023-07-18 20:15:24,450 DEBUG [RS:1;jenkins-hbase4:33727] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,450 INFO [RS:2;jenkins-hbase4:38655] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,450 DEBUG [RS:1;jenkins-hbase4:33727] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:2;jenkins-hbase4:38655] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:1;jenkins-hbase4:33727] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:2;jenkins-hbase4:38655] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:1;jenkins-hbase4:33727] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:2;jenkins-hbase4:38655] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:1;jenkins-hbase4:33727] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:2;jenkins-hbase4:38655] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:1;jenkins-hbase4:33727] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:15:24,451 DEBUG [RS:2;jenkins-hbase4:38655] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:1;jenkins-hbase4:33727] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:2;jenkins-hbase4:38655] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:15:24,451 DEBUG [RS:1;jenkins-hbase4:33727] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:2;jenkins-hbase4:38655] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:1;jenkins-hbase4:33727] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:2;jenkins-hbase4:38655] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:1;jenkins-hbase4:33727] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:2;jenkins-hbase4:38655] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,451 DEBUG [RS:2;jenkins-hbase4:38655] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:24,455 INFO [RS:1;jenkins-hbase4:33727] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,455 INFO [RS:1;jenkins-hbase4:33727] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,455 INFO [RS:1;jenkins-hbase4:33727] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,455 INFO [RS:2;jenkins-hbase4:38655] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,456 INFO [RS:2;jenkins-hbase4:38655] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,456 INFO [RS:2;jenkins-hbase4:38655] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,467 INFO [RS:0;jenkins-hbase4:43051] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 20:15:24,467 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43051,1689711323555-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,470 INFO [RS:1;jenkins-hbase4:33727] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 20:15:24,470 INFO [RS:1;jenkins-hbase4:33727] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33727,1689711323717-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,478 INFO [RS:2;jenkins-hbase4:38655] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 20:15:24,478 INFO [RS:2;jenkins-hbase4:38655] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38655,1689711323882-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 20:15:24,484 INFO [RS:0;jenkins-hbase4:43051] regionserver.Replication(203): jenkins-hbase4.apache.org,43051,1689711323555 started 2023-07-18 20:15:24,484 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43051,1689711323555, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43051, sessionid=0x1017a132be30001 2023-07-18 20:15:24,484 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 20:15:24,484 DEBUG [RS:0;jenkins-hbase4:43051] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:24,484 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43051,1689711323555' 2023-07-18 20:15:24,484 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 20:15:24,485 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 20:15:24,485 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 20:15:24,485 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 20:15:24,485 INFO [RS:1;jenkins-hbase4:33727] regionserver.Replication(203): jenkins-hbase4.apache.org,33727,1689711323717 started 2023-07-18 20:15:24,485 DEBUG [RS:0;jenkins-hbase4:43051] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:24,485 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43051,1689711323555' 2023-07-18 20:15:24,486 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 20:15:24,485 INFO [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33727,1689711323717, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33727, sessionid=0x1017a132be30002 2023-07-18 20:15:24,486 DEBUG [RS:1;jenkins-hbase4:33727] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 20:15:24,486 DEBUG [RS:1;jenkins-hbase4:33727] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:24,486 DEBUG [RS:1;jenkins-hbase4:33727] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33727,1689711323717' 2023-07-18 20:15:24,486 DEBUG [RS:1;jenkins-hbase4:33727] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 20:15:24,486 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 20:15:24,486 DEBUG [RS:1;jenkins-hbase4:33727] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 20:15:24,486 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot 
started 2023-07-18 20:15:24,486 INFO [RS:0;jenkins-hbase4:43051] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 20:15:24,486 INFO [RS:0;jenkins-hbase4:43051] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 20:15:24,487 DEBUG [RS:1;jenkins-hbase4:33727] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 20:15:24,487 DEBUG [RS:1;jenkins-hbase4:33727] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 20:15:24,487 DEBUG [RS:1;jenkins-hbase4:33727] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:24,487 DEBUG [RS:1;jenkins-hbase4:33727] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33727,1689711323717' 2023-07-18 20:15:24,487 DEBUG [RS:1;jenkins-hbase4:33727] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 20:15:24,487 DEBUG [RS:1;jenkins-hbase4:33727] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 20:15:24,488 DEBUG [RS:1;jenkins-hbase4:33727] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 20:15:24,488 INFO [RS:1;jenkins-hbase4:33727] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 20:15:24,488 INFO [RS:1;jenkins-hbase4:33727] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 20:15:24,491 INFO [RS:2;jenkins-hbase4:38655] regionserver.Replication(203): jenkins-hbase4.apache.org,38655,1689711323882 started 2023-07-18 20:15:24,491 INFO [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38655,1689711323882, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38655, sessionid=0x1017a132be30003 2023-07-18 20:15:24,491 DEBUG [RS:2;jenkins-hbase4:38655] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 20:15:24,491 DEBUG [RS:2;jenkins-hbase4:38655] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:24,491 DEBUG [RS:2;jenkins-hbase4:38655] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38655,1689711323882' 2023-07-18 20:15:24,491 DEBUG [RS:2;jenkins-hbase4:38655] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 20:15:24,491 DEBUG [RS:2;jenkins-hbase4:38655] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 20:15:24,492 DEBUG [RS:2;jenkins-hbase4:38655] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 20:15:24,492 DEBUG [RS:2;jenkins-hbase4:38655] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 20:15:24,492 DEBUG [RS:2;jenkins-hbase4:38655] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:24,492 DEBUG [RS:2;jenkins-hbase4:38655] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38655,1689711323882' 2023-07-18 20:15:24,492 DEBUG 
[RS:2;jenkins-hbase4:38655] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 20:15:24,492 DEBUG [RS:2;jenkins-hbase4:38655] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 20:15:24,492 DEBUG [RS:2;jenkins-hbase4:38655] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 20:15:24,492 INFO [RS:2;jenkins-hbase4:38655] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 20:15:24,492 INFO [RS:2;jenkins-hbase4:38655] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 20:15:24,565 DEBUG [jenkins-hbase4:41751] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 20:15:24,565 DEBUG [jenkins-hbase4:41751] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:24,565 DEBUG [jenkins-hbase4:41751] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:24,565 DEBUG [jenkins-hbase4:41751] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:24,565 DEBUG [jenkins-hbase4:41751] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:24,565 DEBUG [jenkins-hbase4:41751] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:24,566 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38655,1689711323882, state=OPENING 2023-07-18 20:15:24,569 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 20:15:24,571 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:24,571 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38655,1689711323882}] 2023-07-18 20:15:24,571 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 20:15:24,580 WARN [ReadOnlyZKClient-127.0.0.1:57108@0x5f5b1a7a] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 20:15:24,580 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41751,1689711323381] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:15:24,582 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38972, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:15:24,582 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38655] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:38972 deadline: 1689711384582, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:24,588 INFO [RS:0;jenkins-hbase4:43051] wal.AbstractFSWAL(489): WAL configuration: 
blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43051%2C1689711323555, suffix=, logDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,43051,1689711323555, archiveDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/oldWALs, maxLogs=32 2023-07-18 20:15:24,589 INFO [RS:1;jenkins-hbase4:33727] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33727%2C1689711323717, suffix=, logDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,33727,1689711323717, archiveDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/oldWALs, maxLogs=32 2023-07-18 20:15:24,594 INFO [RS:2;jenkins-hbase4:38655] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38655%2C1689711323882, suffix=, logDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,38655,1689711323882, archiveDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/oldWALs, maxLogs=32 2023-07-18 20:15:24,615 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK] 2023-07-18 20:15:24,616 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK] 2023-07-18 20:15:24,616 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK] 2023-07-18 20:15:24,616 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK] 2023-07-18 20:15:24,616 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK] 2023-07-18 20:15:24,616 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK] 2023-07-18 20:15:24,619 INFO [RS:0;jenkins-hbase4:43051] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,43051,1689711323555/jenkins-hbase4.apache.org%2C43051%2C1689711323555.1689711324589 2023-07-18 20:15:24,619 INFO [RS:1;jenkins-hbase4:33727] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,33727,1689711323717/jenkins-hbase4.apache.org%2C33727%2C1689711323717.1689711324590 2023-07-18 20:15:24,621 DEBUG [RS:0;jenkins-hbase4:43051] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK], DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK], DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK]] 2023-07-18 20:15:24,621 DEBUG [RS:1;jenkins-hbase4:33727] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK], DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK], DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK]] 2023-07-18 20:15:24,628 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK] 2023-07-18 20:15:24,628 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK] 2023-07-18 20:15:24,628 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK] 2023-07-18 20:15:24,631 INFO [RS:2;jenkins-hbase4:38655] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,38655,1689711323882/jenkins-hbase4.apache.org%2C38655%2C1689711323882.1689711324595 2023-07-18 20:15:24,631 DEBUG [RS:2;jenkins-hbase4:38655] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK], DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK], DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK]] 2023-07-18 20:15:24,726 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:24,728 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:15:24,730 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38984, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:15:24,733 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 20:15:24,733 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:24,735 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38655%2C1689711323882.meta, 
suffix=.meta, logDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,38655,1689711323882, archiveDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/oldWALs, maxLogs=32 2023-07-18 20:15:24,752 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK] 2023-07-18 20:15:24,753 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK] 2023-07-18 20:15:24,754 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK] 2023-07-18 20:15:24,757 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,38655,1689711323882/jenkins-hbase4.apache.org%2C38655%2C1689711323882.meta.1689711324735.meta 2023-07-18 20:15:24,757 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK], DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK], DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK]] 2023-07-18 20:15:24,757 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:24,757 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 20:15:24,757 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 20:15:24,758 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-18 20:15:24,758 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 20:15:24,758 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:24,758 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 20:15:24,758 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 20:15:24,760 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 20:15:24,761 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/info 2023-07-18 20:15:24,761 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/info 2023-07-18 20:15:24,761 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 20:15:24,762 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:24,762 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 20:15:24,763 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/rep_barrier 2023-07-18 20:15:24,763 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/rep_barrier 2023-07-18 20:15:24,763 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 20:15:24,763 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:24,764 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 20:15:24,764 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/table 2023-07-18 20:15:24,764 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/table 2023-07-18 20:15:24,765 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 20:15:24,765 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:24,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740 2023-07-18 20:15:24,767 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740 2023-07-18 20:15:24,768 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
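[Editor's note] The last entry above shows FlushLargeStoresPolicy falling back to "memstore flush heap size divided by number of families" because hbase.hregion.percolumnfamilyflush.size.lower.bound is not set for hbase:meta. With the default 128 MB flush size and the three families opened above (info, rep_barrier, table), that is the logged 42.7 M, and it matches the flushSizeLowerBound=44739242 reported a few entries later. A minimal sketch of that arithmetic, assuming the default flush size:

```java
// Rough reconstruction of the logged fallback: 128 MB flush size / 3 families.
long memstoreFlushSize = 128L * 1024 * 1024;  // hbase.hregion.memstore.flush.size default
int columnFamilies = 3;                       // info, rep_barrier, table in hbase:meta
long lowerBound = memstoreFlushSize / columnFamilies;
System.out.println(lowerBound);               // 44739242 bytes, i.e. ~42.7 MB
```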
2023-07-18 20:15:24,770 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 20:15:24,771 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11343308960, jitterRate=0.0564279705286026}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 20:15:24,771 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 20:15:24,771 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689711324726 2023-07-18 20:15:24,776 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 20:15:24,777 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 20:15:24,777 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38655,1689711323882, state=OPEN 2023-07-18 20:15:24,778 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 20:15:24,779 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 20:15:24,780 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 20:15:24,780 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38655,1689711323882 in 208 msec 2023-07-18 20:15:24,782 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 20:15:24,782 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 372 msec 2023-07-18 20:15:24,784 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 511 msec 2023-07-18 20:15:24,785 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689711324785, completionTime=-1 2023-07-18 20:15:24,785 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 20:15:24,785 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
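[Editor's note] The entries above show hbase:meta opening on the 38655 region server and its location being published to the /hbase/meta-region-server znode. As an illustrative sketch only (MetaLocationSketch is a hypothetical class, and a default client Configuration pointing at this cluster is assumed), a client can ask where meta is currently hosted once it is online:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLocationSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // Asks for the current location of hbase:meta (the RS on port 38655 in the log above).
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      System.out.println(loc.getServerName());
    }
  }
}
```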
2023-07-18 20:15:24,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 20:15:24,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689711384793 2023-07-18 20:15:24,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689711444794 2023-07-18 20:15:24,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 8 msec 2023-07-18 20:15:24,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41751,1689711323381-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41751,1689711323381-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41751,1689711323381-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41751, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:24,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 20:15:24,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:24,802 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 20:15:24,802 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 20:15:24,803 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:24,804 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:24,805 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c 2023-07-18 20:15:24,806 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c empty. 2023-07-18 20:15:24,807 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c 2023-07-18 20:15:24,807 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 20:15:24,823 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:24,824 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5aa0d940e5fa08b87c55fa08bfcc258c, NAME => 'hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp 2023-07-18 20:15:24,835 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-18 20:15:24,845 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:24,845 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 5aa0d940e5fa08b87c55fa08bfcc258c, disabling compactions & flushes 2023-07-18 20:15:24,848 
INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 2023-07-18 20:15:24,848 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 2023-07-18 20:15:24,848 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. after waiting 0 ms 2023-07-18 20:15:24,848 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 2023-07-18 20:15:24,848 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 2023-07-18 20:15:24,848 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 5aa0d940e5fa08b87c55fa08bfcc258c: 2023-07-18 20:15:24,852 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:24,855 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711324855"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711324855"}]},"ts":"1689711324855"} 2023-07-18 20:15:24,862 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 20:15:24,868 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:24,868 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711324868"}]},"ts":"1689711324868"} 2023-07-18 20:15:24,869 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 20:15:24,873 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:24,873 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:24,873 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:24,873 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:24,873 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:24,873 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5aa0d940e5fa08b87c55fa08bfcc258c, ASSIGN}] 2023-07-18 20:15:24,875 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5aa0d940e5fa08b87c55fa08bfcc258c, ASSIGN 2023-07-18 
20:15:24,876 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5aa0d940e5fa08b87c55fa08bfcc258c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33727,1689711323717; forceNewPlan=false, retain=false 2023-07-18 20:15:24,890 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41751,1689711323381] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:24,896 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41751,1689711323381] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 20:15:24,902 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:24,902 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:24,904 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447 2023-07-18 20:15:24,904 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447 empty. 
2023-07-18 20:15:24,905 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447 2023-07-18 20:15:24,905 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 20:15:24,925 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:24,926 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8e6a37927d7245565c64643b64c42447, NAME => 'hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp 2023-07-18 20:15:24,943 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:24,943 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 8e6a37927d7245565c64643b64c42447, disabling compactions & flushes 2023-07-18 20:15:24,943 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 2023-07-18 20:15:24,943 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 2023-07-18 20:15:24,944 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. after waiting 0 ms 2023-07-18 20:15:24,944 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 2023-07-18 20:15:24,944 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 
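[Editor's note] The entries around here show CreateTableProcedure writing the FS layout and initializing the single regions of hbase:namespace and hbase:rsgroup. Those system tables are created internally by the master, not through Admin, but as a hedged sketch (hypothetical table "demo" and class CreateTableSketch; only the BLOCKSIZE, IN_MEMORY, and VERSIONS values are taken from the hbase:namespace descriptor logged above), a comparable client-driven create looks like this and ends up driving the same kind of CreateTableProcedure:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setBlocksize(8192)   // BLOCKSIZE => '8192' as in the hbase:namespace descriptor
              .setInMemory(true)    // IN_MEMORY => 'true'
              .setMaxVersions(10)   // VERSIONS => '10'
              .build())
          .build();
      admin.createTable(td);        // submits a CreateTableProcedure, analogous to pid=4 above
    }
  }
}
```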
2023-07-18 20:15:24,944 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 8e6a37927d7245565c64643b64c42447: 2023-07-18 20:15:24,950 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:24,955 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711324954"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711324954"}]},"ts":"1689711324954"} 2023-07-18 20:15:24,959 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 20:15:24,959 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:24,959 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711324959"}]},"ts":"1689711324959"} 2023-07-18 20:15:24,961 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 20:15:24,964 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:24,964 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:24,964 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:24,964 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:24,964 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:24,964 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=8e6a37927d7245565c64643b64c42447, ASSIGN}] 2023-07-18 20:15:24,965 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=8e6a37927d7245565c64643b64c42447, ASSIGN 2023-07-18 20:15:24,966 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=8e6a37927d7245565c64643b64c42447, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33727,1689711323717; forceNewPlan=false, retain=false 2023-07-18 20:15:24,966 INFO [jenkins-hbase4:41751] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-18 20:15:24,968 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5aa0d940e5fa08b87c55fa08bfcc258c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:24,968 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711324968"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711324968"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711324968"}]},"ts":"1689711324968"} 2023-07-18 20:15:24,968 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=8e6a37927d7245565c64643b64c42447, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:24,968 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711324968"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711324968"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711324968"}]},"ts":"1689711324968"} 2023-07-18 20:15:24,969 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 5aa0d940e5fa08b87c55fa08bfcc258c, server=jenkins-hbase4.apache.org,33727,1689711323717}] 2023-07-18 20:15:24,970 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 8e6a37927d7245565c64643b64c42447, server=jenkins-hbase4.apache.org,33727,1689711323717}] 2023-07-18 20:15:25,121 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:25,121 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 20:15:25,123 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47398, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 20:15:25,127 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 
2023-07-18 20:15:25,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5aa0d940e5fa08b87c55fa08bfcc258c, NAME => 'hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:25,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5aa0d940e5fa08b87c55fa08bfcc258c 2023-07-18 20:15:25,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:25,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5aa0d940e5fa08b87c55fa08bfcc258c 2023-07-18 20:15:25,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5aa0d940e5fa08b87c55fa08bfcc258c 2023-07-18 20:15:25,129 INFO [StoreOpener-5aa0d940e5fa08b87c55fa08bfcc258c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5aa0d940e5fa08b87c55fa08bfcc258c 2023-07-18 20:15:25,130 DEBUG [StoreOpener-5aa0d940e5fa08b87c55fa08bfcc258c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c/info 2023-07-18 20:15:25,130 DEBUG [StoreOpener-5aa0d940e5fa08b87c55fa08bfcc258c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c/info 2023-07-18 20:15:25,131 INFO [StoreOpener-5aa0d940e5fa08b87c55fa08bfcc258c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5aa0d940e5fa08b87c55fa08bfcc258c columnFamilyName info 2023-07-18 20:15:25,131 INFO [StoreOpener-5aa0d940e5fa08b87c55fa08bfcc258c-1] regionserver.HStore(310): Store=5aa0d940e5fa08b87c55fa08bfcc258c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:25,132 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c 2023-07-18 20:15:25,132 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c 2023-07-18 20:15:25,135 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5aa0d940e5fa08b87c55fa08bfcc258c 2023-07-18 20:15:25,137 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:25,137 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5aa0d940e5fa08b87c55fa08bfcc258c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9412640000, jitterRate=-0.12337958812713623}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:25,137 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5aa0d940e5fa08b87c55fa08bfcc258c: 2023-07-18 20:15:25,138 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c., pid=8, masterSystemTime=1689711325121 2023-07-18 20:15:25,142 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 2023-07-18 20:15:25,143 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 2023-07-18 20:15:25,143 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 
2023-07-18 20:15:25,143 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8e6a37927d7245565c64643b64c42447, NAME => 'hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:25,143 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5aa0d940e5fa08b87c55fa08bfcc258c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:25,143 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 20:15:25,143 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689711325143"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711325143"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711325143"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711325143"}]},"ts":"1689711325143"} 2023-07-18 20:15:25,143 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. service=MultiRowMutationService 2023-07-18 20:15:25,144 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-18 20:15:25,144 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 8e6a37927d7245565c64643b64c42447 2023-07-18 20:15:25,144 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:25,144 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8e6a37927d7245565c64643b64c42447 2023-07-18 20:15:25,144 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8e6a37927d7245565c64643b64c42447 2023-07-18 20:15:25,145 INFO [StoreOpener-8e6a37927d7245565c64643b64c42447-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 8e6a37927d7245565c64643b64c42447 2023-07-18 20:15:25,147 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-18 20:15:25,147 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 5aa0d940e5fa08b87c55fa08bfcc258c, server=jenkins-hbase4.apache.org,33727,1689711323717 in 176 msec 2023-07-18 20:15:25,147 DEBUG [StoreOpener-8e6a37927d7245565c64643b64c42447-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447/m 2023-07-18 20:15:25,147 DEBUG [StoreOpener-8e6a37927d7245565c64643b64c42447-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447/m 2023-07-18 20:15:25,148 INFO [StoreOpener-8e6a37927d7245565c64643b64c42447-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8e6a37927d7245565c64643b64c42447 columnFamilyName m 2023-07-18 20:15:25,148 INFO [StoreOpener-8e6a37927d7245565c64643b64c42447-1] regionserver.HStore(310): Store=8e6a37927d7245565c64643b64c42447/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:25,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-18 20:15:25,149 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5aa0d940e5fa08b87c55fa08bfcc258c, ASSIGN in 274 msec 2023-07-18 20:15:25,149 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:25,149 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447 2023-07-18 20:15:25,149 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711325149"}]},"ts":"1689711325149"} 2023-07-18 20:15:25,150 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447 2023-07-18 20:15:25,150 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 20:15:25,153 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:25,153 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8e6a37927d7245565c64643b64c42447 2023-07-18 20:15:25,154 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure 
table=hbase:namespace in 352 msec 2023-07-18 20:15:25,156 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:25,156 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8e6a37927d7245565c64643b64c42447; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5e7000f4, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:25,156 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8e6a37927d7245565c64643b64c42447: 2023-07-18 20:15:25,157 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447., pid=9, masterSystemTime=1689711325121 2023-07-18 20:15:25,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 2023-07-18 20:15:25,158 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 2023-07-18 20:15:25,158 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=8e6a37927d7245565c64643b64c42447, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:25,159 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689711325158"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711325158"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711325158"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711325158"}]},"ts":"1689711325158"} 2023-07-18 20:15:25,162 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 20:15:25,162 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 8e6a37927d7245565c64643b64c42447, server=jenkins-hbase4.apache.org,33727,1689711323717 in 190 msec 2023-07-18 20:15:25,164 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-18 20:15:25,164 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=8e6a37927d7245565c64643b64c42447, ASSIGN in 198 msec 2023-07-18 20:15:25,164 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:25,164 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711325164"}]},"ts":"1689711325164"} 2023-07-18 20:15:25,166 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 
20:15:25,169 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:25,170 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 279 msec 2023-07-18 20:15:25,199 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41751,1689711323381] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:15:25,200 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47400, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:15:25,202 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 20:15:25,203 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-18 20:15:25,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 20:15:25,205 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:15:25,205 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:25,211 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:25,211 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:25,211 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 20:15:25,213 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 20:15:25,215 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 20:15:25,219 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:15:25,222 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-07-18 20:15:25,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 20:15:25,239 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:15:25,242 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-18 20:15:25,249 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 20:15:25,251 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 20:15:25,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.122sec 2023-07-18 20:15:25,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 20:15:25,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 20:15:25,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 20:15:25,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41751,1689711323381-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 20:15:25,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41751,1689711323381-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
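[Editor's note] With master initialization complete here, the entries that follow show the test obtaining a client connection and the master handling a "set balanceSwitch=false" request. A hedged sketch of the equivalent client call (BalancerSwitchSketch is a hypothetical class; a client Configuration for this cluster is assumed):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class BalancerSwitchSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Mirrors the "set balanceSwitch=false" request logged by MasterRpcServices below.
      boolean previouslyEnabled = admin.balancerSwitch(false, true);
      System.out.println("balancer previously enabled: " + previouslyEnabled);
    }
  }
}
```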
2023-07-18 20:15:25,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 20:15:25,322 DEBUG [Listener at localhost/43545] zookeeper.ReadOnlyZKClient(139): Connect 0x72676dde to 127.0.0.1:57108 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:25,327 DEBUG [Listener at localhost/43545] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34993deb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:15:25,329 DEBUG [hconnection-0x72c3dca2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:15:25,331 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38986, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:15:25,332 INFO [Listener at localhost/43545] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41751,1689711323381 2023-07-18 20:15:25,333 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:25,335 DEBUG [Listener at localhost/43545] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 20:15:25,336 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58876, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 20:15:25,340 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 20:15:25,340 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:25,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 20:15:25,341 DEBUG [Listener at localhost/43545] zookeeper.ReadOnlyZKClient(139): Connect 0x357edf8c to 127.0.0.1:57108 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:25,346 DEBUG [Listener at localhost/43545] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5934234d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:15:25,346 INFO [Listener at localhost/43545] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57108 2023-07-18 20:15:25,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:25,353 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, 
state=SyncConnected, path=null 2023-07-18 20:15:25,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:25,355 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017a132be3000a connected 2023-07-18 20:15:25,358 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 20:15:25,375 INFO [Listener at localhost/43545] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 20:15:25,376 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:25,376 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:25,376 INFO [Listener at localhost/43545] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 20:15:25,376 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 20:15:25,376 INFO [Listener at localhost/43545] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 20:15:25,376 INFO [Listener at localhost/43545] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 20:15:25,377 INFO [Listener at localhost/43545] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35345 2023-07-18 20:15:25,377 INFO [Listener at localhost/43545] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 20:15:25,379 DEBUG [Listener at localhost/43545] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 20:15:25,379 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:25,380 INFO [Listener at localhost/43545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 20:15:25,381 INFO [Listener at localhost/43545] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35345 connecting to ZooKeeper ensemble=127.0.0.1:57108 2023-07-18 20:15:25,385 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:353450x0, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 20:15:25,388 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35345-0x1017a132be3000b connected 2023-07-18 20:15:25,388 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(162): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, 
baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 20:15:25,389 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(162): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 20:15:25,390 DEBUG [Listener at localhost/43545] zookeeper.ZKUtil(164): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 20:15:25,390 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35345 2023-07-18 20:15:25,391 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35345 2023-07-18 20:15:25,391 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35345 2023-07-18 20:15:25,391 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35345 2023-07-18 20:15:25,391 DEBUG [Listener at localhost/43545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35345 2023-07-18 20:15:25,393 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 20:15:25,393 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 20:15:25,393 INFO [Listener at localhost/43545] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 20:15:25,394 INFO [Listener at localhost/43545] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 20:15:25,394 INFO [Listener at localhost/43545] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 20:15:25,394 INFO [Listener at localhost/43545] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 20:15:25,394 INFO [Listener at localhost/43545] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 20:15:25,394 INFO [Listener at localhost/43545] http.HttpServer(1146): Jetty bound to port 41773 2023-07-18 20:15:25,394 INFO [Listener at localhost/43545] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 20:15:25,396 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:25,396 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@523335fd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.log.dir/,AVAILABLE} 2023-07-18 20:15:25,396 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:25,396 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@67ada82a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 20:15:25,510 INFO [Listener at localhost/43545] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 20:15:25,511 INFO [Listener at localhost/43545] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 20:15:25,511 INFO [Listener at localhost/43545] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 20:15:25,511 INFO [Listener at localhost/43545] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 20:15:25,512 INFO [Listener at localhost/43545] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 20:15:25,513 INFO [Listener at localhost/43545] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6b2f8788{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/java.io.tmpdir/jetty-0_0_0_0-41773-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4297412707153718091/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:25,515 INFO [Listener at localhost/43545] server.AbstractConnector(333): Started ServerConnector@45f7964c{HTTP/1.1, (http/1.1)}{0.0.0.0:41773} 2023-07-18 20:15:25,515 INFO [Listener at localhost/43545] server.Server(415): Started @44804ms 2023-07-18 20:15:25,517 INFO [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(951): ClusterId : 96cf92bc-5850-4e9f-9d03-7a4e4812ba40 2023-07-18 20:15:25,518 DEBUG [RS:3;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 20:15:25,519 DEBUG [RS:3;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 20:15:25,519 DEBUG [RS:3;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 20:15:25,521 DEBUG [RS:3;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 20:15:25,522 DEBUG [RS:3;jenkins-hbase4:35345] zookeeper.ReadOnlyZKClient(139): Connect 0x1d86a1bb to 
127.0.0.1:57108 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 20:15:25,526 DEBUG [RS:3;jenkins-hbase4:35345] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13414c7b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 20:15:25,527 DEBUG [RS:3;jenkins-hbase4:35345] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@177ecacb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:15:25,535 DEBUG [RS:3;jenkins-hbase4:35345] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:35345 2023-07-18 20:15:25,535 INFO [RS:3;jenkins-hbase4:35345] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 20:15:25,535 INFO [RS:3;jenkins-hbase4:35345] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 20:15:25,535 DEBUG [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 20:15:25,536 INFO [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41751,1689711323381 with isa=jenkins-hbase4.apache.org/172.31.14.131:35345, startcode=1689711325375 2023-07-18 20:15:25,536 DEBUG [RS:3;jenkins-hbase4:35345] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 20:15:25,538 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52757, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 20:15:25,538 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41751] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:25,538 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 20:15:25,539 DEBUG [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499 2023-07-18 20:15:25,539 DEBUG [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40885 2023-07-18 20:15:25,539 DEBUG [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34953 2023-07-18 20:15:25,545 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:25,545 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:25,545 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:25,545 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:25,545 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:25,545 DEBUG [RS:3;jenkins-hbase4:35345] zookeeper.ZKUtil(162): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:25,545 WARN [RS:3;jenkins-hbase4:35345] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 20:15:25,545 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 20:15:25,545 INFO [RS:3;jenkins-hbase4:35345] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 20:15:25,546 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35345,1689711325375] 2023-07-18 20:15:25,546 DEBUG [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:25,546 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:25,546 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:25,546 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:25,550 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:25,550 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:25,550 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 20:15:25,550 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:25,550 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:25,551 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:25,551 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:25,551 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:25,552 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:25,552 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:25,552 DEBUG [RS:3;jenkins-hbase4:35345] zookeeper.ZKUtil(162): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:25,553 DEBUG [RS:3;jenkins-hbase4:35345] zookeeper.ZKUtil(162): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:25,553 DEBUG [RS:3;jenkins-hbase4:35345] zookeeper.ZKUtil(162): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:25,553 DEBUG [RS:3;jenkins-hbase4:35345] zookeeper.ZKUtil(162): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:25,554 DEBUG [RS:3;jenkins-hbase4:35345] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 20:15:25,554 INFO [RS:3;jenkins-hbase4:35345] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 20:15:25,555 INFO [RS:3;jenkins-hbase4:35345] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 20:15:25,556 INFO [RS:3;jenkins-hbase4:35345] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 20:15:25,556 INFO [RS:3;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:25,556 INFO [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 20:15:25,557 INFO [RS:3;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 20:15:25,557 DEBUG [RS:3;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:25,557 DEBUG [RS:3;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:25,557 DEBUG [RS:3;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:25,558 DEBUG [RS:3;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:25,558 DEBUG [RS:3;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:25,558 DEBUG [RS:3;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 20:15:25,558 DEBUG [RS:3;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:25,558 DEBUG [RS:3;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:25,558 DEBUG [RS:3;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:25,558 DEBUG [RS:3;jenkins-hbase4:35345] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 20:15:25,558 INFO [RS:3;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:25,559 INFO [RS:3;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:25,559 INFO [RS:3;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 20:15:25,570 INFO [RS:3;jenkins-hbase4:35345] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 20:15:25,570 INFO [RS:3;jenkins-hbase4:35345] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35345,1689711325375-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 20:15:25,580 INFO [RS:3;jenkins-hbase4:35345] regionserver.Replication(203): jenkins-hbase4.apache.org,35345,1689711325375 started 2023-07-18 20:15:25,580 INFO [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35345,1689711325375, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35345, sessionid=0x1017a132be3000b 2023-07-18 20:15:25,580 DEBUG [RS:3;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 20:15:25,580 DEBUG [RS:3;jenkins-hbase4:35345] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:25,580 DEBUG [RS:3;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35345,1689711325375' 2023-07-18 20:15:25,580 DEBUG [RS:3;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 20:15:25,581 DEBUG [RS:3;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 20:15:25,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:25,582 DEBUG [RS:3;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 20:15:25,582 DEBUG [RS:3;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 20:15:25,582 DEBUG [RS:3;jenkins-hbase4:35345] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:25,582 DEBUG [RS:3;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35345,1689711325375' 2023-07-18 20:15:25,582 DEBUG [RS:3;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 20:15:25,582 DEBUG [RS:3;jenkins-hbase4:35345] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 20:15:25,582 DEBUG [RS:3;jenkins-hbase4:35345] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 20:15:25,582 INFO [RS:3;jenkins-hbase4:35345] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 20:15:25,582 INFO [RS:3;jenkins-hbase4:35345] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 20:15:25,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:25,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:25,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:25,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:25,588 DEBUG [hconnection-0x72210f9b-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:15:25,589 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38996, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:15:25,593 DEBUG [hconnection-0x72210f9b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 20:15:25,594 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47404, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 20:15:25,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:25,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:25,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41751] to rsgroup master 2023-07-18 20:15:25,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:25,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:58876 deadline: 1689712525598, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 2023-07-18 20:15:25,599 WARN [Listener at localhost/43545] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:25,600 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:25,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:25,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:25,601 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33727, jenkins-hbase4.apache.org:35345, jenkins-hbase4.apache.org:38655, jenkins-hbase4.apache.org:43051], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:25,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:25,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:25,659 INFO [Listener at localhost/43545] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=560 (was 514) Potentially hanging thread: IPC Server handler 1 on default port 40885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 2 on default port 41877 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x72676dde sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1925277295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-11363302-172.31.14.131-1689711322671 heartbeating to localhost/127.0.0.1:40885 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499-prefix:jenkins-hbase4.apache.org,38655,1689711323882 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 41877 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35345 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43545 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:40885 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x1d86a1bb-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp643283090-2287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1980732348-2223 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1816991440-2313-acceptor-0@68ebde25-ServerConnector@795d3fe0{HTTP/1.1, (http/1.1)}{0.0.0.0:45065} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp468268922-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:41751 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp468268922-2254 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp679398534-2327-acceptor-0@6c8af0cb-ServerConnector@26b927f9{HTTP/1.1, (http/1.1)}{0.0.0.0:46619} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1101141685-2591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x357edf8c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1925277295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp679398534-2324 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/992306736.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp643283090-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@6f41ab3a java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:33727-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:33727 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43545-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37791-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-7aeb7de4-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-11363302-172.31.14.131-1689711322671 heartbeating to localhost/127.0.0.1:40885 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 34161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43545-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x357edf8c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data3/current/BP-11363302-172.31.14.131-1689711322671 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 41877 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:44781 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x561f31ec-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61189@0x6b7167b7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x72210f9b-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35345 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3640407b sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: IPC Server handler 0 on default port 34161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 40885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:33727Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:40885 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1101141685-2594 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43545-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 40885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1705369331_17 at /127.0.0.1:50374 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35345 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:57108 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61189@0x6b7167b7-SendThread(127.0.0.1:61189) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43545-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 197941035@qtp-292360093-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server idle connection scanner for port 34161 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@79ce1da4[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43545.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x1d86a1bb-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1816991440-2316 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35345 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43545-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/43545-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689711324290 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: hconnection-0x2fa3ae88-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:38655Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-599547144_17 at /127.0.0.1:50330 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@26442371 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1980732348-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:43051 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-11363302-172.31.14.131-1689711322671 heartbeating to localhost/127.0.0.1:40885 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp679398534-2325 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/992306736.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:38655-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x2fa3ae88-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-433437676_17 at /127.0.0.1:43288 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x5648724d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially 
hanging thread: Listener at localhost/43545.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-599547144_17 at /127.0.0.1:59888 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1101141685-2593 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x72c3dca2-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41751 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 84259926@qtp-62764798-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34011 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: IPC Server idle connection scanner for port 40885 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x4c291667-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/43545.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: LeaseRenewer:jenkins@localhost:44781 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43051Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:38655 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@46106de1[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data5/current/BP-11363302-172.31.14.131-1689711322671 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp643283090-2286 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689711324290 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61189@0x6b7167b7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1925277295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x72676dde-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x4c291667 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1925277295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x5f5b1a7a-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35345 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41751,1689711323381 
java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x4c291667-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6d35c319 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43545-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 43545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp643283090-2288 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499-prefix:jenkins-hbase4.apache.org,43051,1689711323555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp468268922-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x5648724d-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:41751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@43a47f7f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x5f5b1a7a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35345 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_390056931_17 at /127.0.0.1:59912 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-433437676_17 at /127.0.0.1:50350 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data1/current/BP-11363302-172.31.14.131-1689711322671 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35345 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ProcessThread(sid:0 cport:57108): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: qtp1980732348-2228 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fa3ae88-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:40885 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp643283090-2289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43545-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1980732348-2221 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/992306736.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1101141685-2588-acceptor-0@e59e217-ServerConnector@45f7964c{HTTP/1.1, (http/1.1)}{0.0.0.0:41773} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38655 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp468268922-2253-acceptor-0@f10dabc-ServerConnector@e1f8b2a{HTTP/1.1, (http/1.1)}{0.0.0.0:44461} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:44781 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:44781 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp679398534-2328 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1705369331_17 at /127.0.0.1:59840 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 43545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x2fa3ae88-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:40885 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@5c80a639 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-63be1b37-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41401,1689711317943 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp1816991440-2312 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/992306736.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-433437676_17 at /127.0.0.1:59896 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1980732348-2226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35345 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x561f31ec-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 1920200792@qtp-292360093-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41709 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1919424606@qtp-166841370-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost/37791-SendThread(127.0.0.1:61189) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: LeaseRenewer:jenkins@localhost:40885 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 34161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x72210f9b-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1101141685-2590 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1705369331_17 at /127.0.0.1:59918 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:44781 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x2fa3ae88-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 
IPC Server handler 3 on default port 41877 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp468268922-2258 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35345Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp679398534-2329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:44781 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1816991440-2317 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:44781 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35345 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data2/current/BP-11363302-172.31.14.131-1689711322671 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1816991440-2319 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 43545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:3;jenkins-hbase4:35345 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 43545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x1d86a1bb sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1925277295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:44781 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43545-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially 
hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43545-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/43545.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d7194cb java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1118791691@qtp-1334609261-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:40885 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-2c2d406c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1101141685-2587 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/992306736.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data6/current/BP-11363302-172.31.14.131-1689711322671 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1705369331_17 at /127.0.0.1:43298 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 40885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1705369331_17 at /127.0.0.1:50362 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:40885 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x72676dde-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:40885 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fa3ae88-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43545-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_390056931_17 at /127.0.0.1:50354 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp643283090-2282 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/992306736.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-11363302-172.31.14.131-1689711322671:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43545 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@21a29c39[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@12f60a17 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-1ace3e95-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 41877 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-548-thread-1 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@67752cc0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_390056931_17 at /127.0.0.1:50402 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x357edf8c-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-599547144_17 at /127.0.0.1:43266 
[Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 880935862@qtp-1334609261-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34237 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: qtp1816991440-2318 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1101141685-2592 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 43545 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x5648724d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1925277295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:35345-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1101141685-2589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data4/current/BP-11363302-172.31.14.131-1689711322671 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1980732348-2227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_390056931_17 at /127.0.0.1:43290 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1980732348-2222-acceptor-0@3e458899-ServerConnector@2fb99c35{HTTP/1.1, (http/1.1)}{0.0.0.0:34953} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:44781 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp468268922-2259 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499-prefix:jenkins-hbase4.apache.org,33727,1689711323717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1705369331_17 at /127.0.0.1:59920 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:43051-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp468268922-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1816991440-2315 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499-prefix:jenkins-hbase4.apache.org,38655,1689711323882.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1816991440-2314 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x2fa3ae88-shared-pool-5 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:40885 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 34161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5210a738 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35345 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1980732348-2224 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1705369331_17 at /127.0.0.1:43304 [Receiving block BP-11363302-172.31.14.131-1689711322671:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: CacheReplicationMonitor(1417135045) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@3f69d495 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData-prefix:jenkins-hbase4.apache.org,41751,1689711323381 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 648453475@qtp-62764798-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp679398534-2323 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/992306736.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp468268922-2252 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/992306736.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@6e6f944d java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 40885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x2fa3ae88-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 34161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x561f31ec sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1925277295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp679398534-2326 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/992306736.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57108@0x5f5b1a7a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1925277295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 41877 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@51c36a45 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1877936732) connection to localhost/127.0.0.1:40885 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp643283090-2283-acceptor-0@75b4c9a9-ServerConnector@4bbe91f1{HTTP/1.1, (http/1.1)}{0.0.0.0:40615} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 324249784@qtp-166841370-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36323 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp643283090-2284 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43545-SendThread(127.0.0.1:57108) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp679398534-2330 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
Session-HouseKeeper-7db6a9e1-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=825 (was 798) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=430 (was 374) - SystemLoadAverage LEAK? -, ProcessCount=171 (was 171), AvailableMemoryMB=4231 (was 4405) 2023-07-18 20:15:25,662 WARN [Listener at localhost/43545] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-18 20:15:25,679 INFO [Listener at localhost/43545] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=558, OpenFileDescriptor=825, MaxFileDescriptor=60000, SystemLoadAverage=430, ProcessCount=171, AvailableMemoryMB=4229 2023-07-18 20:15:25,679 WARN [Listener at localhost/43545] hbase.ResourceChecker(130): Thread=558 is superior to 500 2023-07-18 20:15:25,679 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-18 20:15:25,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:25,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:25,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:25,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
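The long "Potentially hanging thread" dump and the Thread=.../OpenFileDescriptor=... summary above are produced by the test's resource checker, which snapshots JVM-level resources before and after each test method and warns when the thread count grows past a threshold (here, the "Thread=560 is superior to 500" warning). As a minimal, JDK-only sketch of that before/after idea, not the HBase ResourceChecker implementation itself (the class name and threshold below are illustrative assumptions):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Sketch of a before/after thread-leak check in the spirit of the output above.
// NOT the HBase ResourceChecker; names and the 500-thread limit are assumptions
// mirroring the warning printed in the log.
public class ThreadLeakCheck {
    private static final int MAX_THREADS = 500;

    private final ThreadMXBean mx = ManagementFactory.getThreadMXBean();
    private int before;

    public void beforeTest() {
        before = mx.getThreadCount();
    }

    public void afterTest(String testName) {
        int after = mx.getThreadCount();
        System.out.printf("%s: Thread=%d (was %d)%n", testName, after, before);
        if (after > MAX_THREADS) {
            System.out.printf("WARN Thread=%d is superior to %d%n", after, MAX_THREADS);
            // Dump the stack of every live thread so "potentially hanging" ones can be inspected.
            for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
                System.out.println("Potentially hanging thread: " + info.getThreadName());
                for (StackTraceElement frame : info.getStackTrace()) {
                    System.out.println("    " + frame);
                }
            }
        }
    }
}
```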
2023-07-18 20:15:25,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:25,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:25,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:25,684 INFO [RS:3;jenkins-hbase4:35345] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35345%2C1689711325375, suffix=, logDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,35345,1689711325375, archiveDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/oldWALs, maxLogs=32 2023-07-18 20:15:25,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:25,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:25,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:25,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:25,693 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:25,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:25,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:25,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:25,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:25,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:25,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:25,709 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK] 2023-07-18 20:15:25,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:25,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41751] to rsgroup master 2023-07-18 20:15:25,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:25,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:58876 deadline: 1689712525711, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 2023-07-18 20:15:25,711 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK] 2023-07-18 20:15:25,711 WARN [Listener at localhost/43545] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 20:15:25,712 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK] 2023-07-18 20:15:25,713 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:25,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:25,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:25,714 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33727, jenkins-hbase4.apache.org:35345, jenkins-hbase4.apache.org:38655, jenkins-hbase4.apache.org:43051], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:25,714 INFO [RS:3;jenkins-hbase4:35345] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/WALs/jenkins-hbase4.apache.org,35345,1689711325375/jenkins-hbase4.apache.org%2C35345%2C1689711325375.1689711325685 2023-07-18 20:15:25,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:25,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:25,715 DEBUG [RS:3;jenkins-hbase4:35345] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40411,DS-82fcc0dc-90fa-40f4-ada8-dc0b05e33d58,DISK], DatanodeInfoWithStorage[127.0.0.1:39419,DS-ab37b49e-352d-4cb5-bb94-0023feeb04f8,DISK], DatanodeInfoWithStorage[127.0.0.1:41671,DS-d0a20b63-2ea1-49cc-bff7-01782339aff3,DISK]] 2023-07-18 20:15:25,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:25,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 20:15:25,718 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:25,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-18 
20:15:25,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 20:15:25,720 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:25,721 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:25,721 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:25,722 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 20:15:25,724 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:25,724 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d empty. 2023-07-18 20:15:25,725 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:25,725 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 20:15:25,735 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-18 20:15:25,736 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => e0a8480e276fefcda83ce65d73cdfc4d, NAME => 't1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp 2023-07-18 20:15:25,750 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:25,750 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing e0a8480e276fefcda83ce65d73cdfc4d, disabling compactions & flushes 2023-07-18 20:15:25,750 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. 2023-07-18 20:15:25,750 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. 2023-07-18 20:15:25,750 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. after waiting 0 ms 2023-07-18 20:15:25,750 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. 
2023-07-18 20:15:25,750 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. 2023-07-18 20:15:25,750 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for e0a8480e276fefcda83ce65d73cdfc4d: 2023-07-18 20:15:25,752 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 20:15:25,753 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689711325753"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711325753"}]},"ts":"1689711325753"} 2023-07-18 20:15:25,755 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 20:15:25,756 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 20:15:25,756 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711325756"}]},"ts":"1689711325756"} 2023-07-18 20:15:25,757 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-18 20:15:25,761 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 20:15:25,761 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 20:15:25,761 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 20:15:25,761 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 20:15:25,761 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 20:15:25,761 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 20:15:25,761 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=e0a8480e276fefcda83ce65d73cdfc4d, ASSIGN}] 2023-07-18 20:15:25,762 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=e0a8480e276fefcda83ce65d73cdfc4d, ASSIGN 2023-07-18 20:15:25,763 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=e0a8480e276fefcda83ce65d73cdfc4d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38655,1689711323882; forceNewPlan=false, retain=false 2023-07-18 20:15:25,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 20:15:25,913 INFO [jenkins-hbase4:41751] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 20:15:25,915 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e0a8480e276fefcda83ce65d73cdfc4d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:25,915 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689711325915"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711325915"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711325915"}]},"ts":"1689711325915"} 2023-07-18 20:15:25,917 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure e0a8480e276fefcda83ce65d73cdfc4d, server=jenkins-hbase4.apache.org,38655,1689711323882}] 2023-07-18 20:15:26,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 20:15:26,072 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. 2023-07-18 20:15:26,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e0a8480e276fefcda83ce65d73cdfc4d, NAME => 't1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d.', STARTKEY => '', ENDKEY => ''} 2023-07-18 20:15:26,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:26,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 20:15:26,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:26,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:26,073 INFO [StoreOpener-e0a8480e276fefcda83ce65d73cdfc4d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:26,075 DEBUG [StoreOpener-e0a8480e276fefcda83ce65d73cdfc4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d/cf1 2023-07-18 20:15:26,075 DEBUG [StoreOpener-e0a8480e276fefcda83ce65d73cdfc4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d/cf1 2023-07-18 20:15:26,075 INFO [StoreOpener-e0a8480e276fefcda83ce65d73cdfc4d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e0a8480e276fefcda83ce65d73cdfc4d columnFamilyName cf1 2023-07-18 20:15:26,076 INFO [StoreOpener-e0a8480e276fefcda83ce65d73cdfc4d-1] regionserver.HStore(310): Store=e0a8480e276fefcda83ce65d73cdfc4d/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 20:15:26,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:26,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:26,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:26,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 20:15:26,082 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e0a8480e276fefcda83ce65d73cdfc4d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9628759360, jitterRate=-0.10325190424919128}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 20:15:26,082 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e0a8480e276fefcda83ce65d73cdfc4d: 2023-07-18 20:15:26,082 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d., pid=14, masterSystemTime=1689711326068 2023-07-18 20:15:26,084 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. 2023-07-18 20:15:26,084 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. 
2023-07-18 20:15:26,084 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e0a8480e276fefcda83ce65d73cdfc4d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:26,084 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689711326084"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689711326084"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689711326084"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689711326084"}]},"ts":"1689711326084"} 2023-07-18 20:15:26,091 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-18 20:15:26,091 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure e0a8480e276fefcda83ce65d73cdfc4d, server=jenkins-hbase4.apache.org,38655,1689711323882 in 172 msec 2023-07-18 20:15:26,092 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 20:15:26,092 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=e0a8480e276fefcda83ce65d73cdfc4d, ASSIGN in 330 msec 2023-07-18 20:15:26,093 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 20:15:26,093 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711326093"}]},"ts":"1689711326093"} 2023-07-18 20:15:26,094 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-18 20:15:26,097 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 20:15:26,098 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 381 msec 2023-07-18 20:15:26,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 20:15:26,323 INFO [Listener at localhost/43545] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-18 20:15:26,323 DEBUG [Listener at localhost/43545] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-18 20:15:26,323 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:26,326 INFO [Listener at localhost/43545] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-18 20:15:26,326 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:26,326 INFO [Listener at localhost/43545] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
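
The run of entries above traces the master-side CreateTableProcedure for 't1' (pid=12) from CREATE_TABLE_PRE_OPERATION through region assignment, with the client polling "Checking to see if procedure is done pid=12" until "procId: 12 completed". For orientation only, a minimal HBase 2.x client call that would drive such a sequence is sketched below; it is not taken from the test source, and the class name, configuration source, and connection setup are assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateT1Sketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();          // assumes hbase-site.xml is on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName t1 = TableName.valueOf("t1");
      // Single column family 'cf1' with defaults, matching the descriptor printed by HMaster above.
      admin.createTable(TableDescriptorBuilder.newBuilder(t1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
          .build());
      // The synchronous createTable blocks until the CreateTableProcedure finishes,
      // i.e. until the "Operation: CREATE ... procId: 12 completed" point in the log.
      System.out.println("t1 exists: " + admin.tableExists(t1));
    }
  }
}
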
2023-07-18 20:15:26,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 20:15:26,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 20:15:26,331 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 20:15:26,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-18 20:15:26,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:58876 deadline: 1689711386327, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-18 20:15:26,333 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:26,334 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-18 20:15:26,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:26,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:26,435 INFO [Listener at localhost/43545] client.HBaseAdmin$15(890): Started disable of t1 2023-07-18 20:15:26,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-18 20:15:26,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-18 20:15:26,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 20:15:26,439 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711326439"}]},"ts":"1689711326439"} 2023-07-18 20:15:26,440 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-18 20:15:26,441 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-18 20:15:26,442 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=e0a8480e276fefcda83ce65d73cdfc4d, UNASSIGN}] 2023-07-18 20:15:26,442 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=e0a8480e276fefcda83ce65d73cdfc4d, UNASSIGN 2023-07-18 20:15:26,443 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e0a8480e276fefcda83ce65d73cdfc4d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:26,443 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689711326443"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689711326443"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689711326443"}]},"ts":"1689711326443"} 2023-07-18 20:15:26,444 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure e0a8480e276fefcda83ce65d73cdfc4d, server=jenkins-hbase4.apache.org,38655,1689711323882}] 2023-07-18 20:15:26,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 20:15:26,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:26,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e0a8480e276fefcda83ce65d73cdfc4d, disabling compactions & flushes 2023-07-18 20:15:26,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. 2023-07-18 20:15:26,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. 2023-07-18 20:15:26,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. after waiting 0 ms 2023-07-18 20:15:26,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. 
2023-07-18 20:15:26,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 20:15:26,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d. 2023-07-18 20:15:26,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e0a8480e276fefcda83ce65d73cdfc4d: 2023-07-18 20:15:26,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:26,602 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e0a8480e276fefcda83ce65d73cdfc4d, regionState=CLOSED 2023-07-18 20:15:26,603 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689711326602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689711326602"}]},"ts":"1689711326602"} 2023-07-18 20:15:26,605 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 20:15:26,605 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure e0a8480e276fefcda83ce65d73cdfc4d, server=jenkins-hbase4.apache.org,38655,1689711323882 in 160 msec 2023-07-18 20:15:26,606 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-18 20:15:26,607 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=e0a8480e276fefcda83ce65d73cdfc4d, UNASSIGN in 163 msec 2023-07-18 20:15:26,607 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689711326607"}]},"ts":"1689711326607"} 2023-07-18 20:15:26,608 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-18 20:15:26,611 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-18 20:15:26,612 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 176 msec 2023-07-18 20:15:26,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 20:15:26,741 INFO [Listener at localhost/43545] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-18 20:15:26,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-18 20:15:26,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-18 20:15:26,744 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 20:15:26,744 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-18 20:15:26,745 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-18 20:15:26,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:26,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:26,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:26,748 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:26,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 20:15:26,750 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d/cf1, FileablePath, hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d/recovered.edits] 2023-07-18 20:15:26,755 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d/recovered.edits/4.seqid to hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/archive/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d/recovered.edits/4.seqid 2023-07-18 20:15:26,755 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/.tmp/data/default/t1/e0a8480e276fefcda83ce65d73cdfc4d 2023-07-18 20:15:26,755 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 20:15:26,757 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-18 20:15:26,759 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-18 20:15:26,761 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-18 20:15:26,762 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-18 20:15:26,762 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
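
Just above, a second create of 't1' fails fast with TableExistsException and pid=15 is rolled back; the surrounding entries then show the table being disabled (DisableTableProcedure pid=16) and deleted (DeleteTableProcedure pid=19). A hedged client-side sketch of that expect-the-exception-then-clean-up pattern follows; it is illustrative only, with the Admin handle assumed to come from a connection like the one in the previous sketch.

import org.apache.hadoop.hbase.TableExistsException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class RecreateAndCleanupSketch {
  // Illustrative helper, not taken from TestRSGroupsAdmin1.
  static void recreateThenDrop(Admin admin) throws Exception {
    TableName t1 = TableName.valueOf("t1");
    try {
      admin.createTable(TableDescriptorBuilder.newBuilder(t1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
          .build());
    } catch (TableExistsException expected) {
      // Matches the "exception=org.apache.hadoop.hbase.TableExistsException: t1" call entry above;
      // the master rolls the CreateTableProcedure back and the client sees this exception type directly.
    }
    admin.disableTable(t1);  // DisableTableProcedure: table moves ENABLED -> DISABLING -> DISABLED, region unassigned
    admin.deleteTable(t1);   // DeleteTableProcedure: archive region dirs, remove rows from hbase:meta
  }
}
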
2023-07-18 20:15:26,762 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689711326762"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:26,764 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 20:15:26,764 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e0a8480e276fefcda83ce65d73cdfc4d, NAME => 't1,,1689711325716.e0a8480e276fefcda83ce65d73cdfc4d.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 20:15:26,764 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-18 20:15:26,764 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689711326764"}]},"ts":"9223372036854775807"} 2023-07-18 20:15:26,765 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-18 20:15:26,767 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 20:15:26,768 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 25 msec 2023-07-18 20:15:26,825 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 20:15:26,825 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-18 20:15:26,826 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 20:15:26,826 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-18 20:15:26,826 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 20:15:26,826 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-18 20:15:26,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 20:15:26,850 INFO [Listener at localhost/43545] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-18 20:15:26,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:26,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:26,854 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:26,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 20:15:26,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:26,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:26,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:26,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:26,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:26,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:26,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:26,869 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:26,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:26,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:26,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:26,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:26,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:26,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:26,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:26,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41751] to rsgroup master 2023-07-18 20:15:26,879 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:26,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:58876 deadline: 1689712526879, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 2023-07-18 20:15:26,880 WARN [Listener at localhost/43545] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 20:15:26,883 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:26,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:26,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:26,884 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33727, jenkins-hbase4.apache.org:35345, jenkins-hbase4.apache.org:38655, jenkins-hbase4.apache.org:43051], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:26,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:26,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:26,903 INFO [Listener at localhost/43545] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=570 (was 558) - Thread LEAK? -, OpenFileDescriptor=835 (was 825) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=430 (was 430), ProcessCount=171 (was 171), AvailableMemoryMB=4223 (was 4229) 2023-07-18 20:15:26,903 WARN [Listener at localhost/43545] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-18 20:15:26,922 INFO [Listener at localhost/43545] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=570, OpenFileDescriptor=835, MaxFileDescriptor=60000, SystemLoadAverage=430, ProcessCount=171, AvailableMemoryMB=4223 2023-07-18 20:15:26,923 WARN [Listener at localhost/43545] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-18 20:15:26,923 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-18 20:15:26,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:26,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:26,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:26,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
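
The repeated ConstraintException stack traces in this section all come from the per-method teardown/setup in TestRSGroupsBase, which rebuilds the 'master' rsgroup and then tries to move the master's RPC address into it; RSGroupAdminServer rejects the move because that address is not an online region server, and the test logs the failure ("Got this on setup, FYI") and continues. A rough client-side sketch of that call path is given below, using the RSGroupAdminClient visible in the traces; the connection, host, and port parameters are placeholders, and the sketch is not the test's own code.

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterToGroupSketch {
  // Illustrative only; mirrors the moveServers call seen in the stack traces above.
  static void tryMoveMasterIntoGroup(Connection conn, String masterHost, int masterRpcPort)
      throws Exception {
    RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
    groupAdmin.addRSGroup("master");  // assumes the group does not exist yet (the test removes and re-adds it)
    try {
      groupAdmin.moveServers(
          Collections.singleton(Address.fromParts(masterHost, masterRpcPort)), "master");
    } catch (ConstraintException e) {
      // "Server ... is either offline or it does not exist.": the master is not a region server,
      // so the move is rejected; the teardown treats this as expected and only logs it.
    }
  }
}
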
2023-07-18 20:15:26,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:26,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:26,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:26,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:26,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:26,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:26,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:26,939 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:26,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:26,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:26,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:26,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:26,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:26,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:26,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:26,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41751] to rsgroup master 2023-07-18 20:15:26,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:26,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58876 deadline: 1689712526949, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 2023-07-18 20:15:26,949 WARN [Listener at localhost/43545] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:26,951 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:26,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:26,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:26,952 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33727, jenkins-hbase4.apache.org:35345, jenkins-hbase4.apache.org:38655, jenkins-hbase4.apache.org:43051], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:26,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:26,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:26,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 20:15:26,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:26,955 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-18 20:15:26,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 20:15:26,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request 
for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 20:15:26,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:26,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:26,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:26,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 20:15:26,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:26,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:26,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:26,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:26,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:26,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:26,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:26,972 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:26,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:26,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:26,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:26,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:26,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:26,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:26,983 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:26,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41751] to rsgroup master 2023-07-18 20:15:26,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:26,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58876 deadline: 1689712526985, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 2023-07-18 20:15:26,985 WARN [Listener at localhost/43545] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 20:15:26,987 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:26,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:26,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:26,988 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33727, jenkins-hbase4.apache.org:35345, jenkins-hbase4.apache.org:38655, jenkins-hbase4.apache.org:43051], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:26,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:26,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:27,005 INFO [Listener at localhost/43545] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=572 (was 570) - Thread LEAK? -, OpenFileDescriptor=835 (was 835), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=430 (was 430), ProcessCount=171 (was 171), AvailableMemoryMB=4222 (was 4223) 2023-07-18 20:15:27,005 WARN [Listener at localhost/43545] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-18 20:15:27,022 INFO [Listener at localhost/43545] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572, OpenFileDescriptor=835, MaxFileDescriptor=60000, SystemLoadAverage=430, ProcessCount=171, AvailableMemoryMB=4222 2023-07-18 20:15:27,022 WARN [Listener at localhost/43545] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-18 20:15:27,022 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-18 20:15:27,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:27,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 20:15:27,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:27,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:27,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:27,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:27,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:27,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:27,035 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:27,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:27,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:27,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:27,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:27,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41751] to rsgroup master 2023-07-18 20:15:27,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:27,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58876 deadline: 1689712527046, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 2023-07-18 20:15:27,047 WARN [Listener at localhost/43545] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:27,049 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:27,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,049 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33727, jenkins-hbase4.apache.org:35345, jenkins-hbase4.apache.org:38655, jenkins-hbase4.apache.org:43051], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:27,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:27,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:27,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:27,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 20:15:27,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:27,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:27,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:27,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:27,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:27,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:27,065 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:27,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:27,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:27,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:27,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:27,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41751] to rsgroup master 2023-07-18 20:15:27,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:27,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58876 deadline: 1689712527074, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 2023-07-18 20:15:27,075 WARN [Listener at localhost/43545] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:27,076 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:27,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,077 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33727, jenkins-hbase4.apache.org:35345, jenkins-hbase4.apache.org:38655, jenkins-hbase4.apache.org:43051], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:27,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:27,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:27,095 INFO [Listener at localhost/43545] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=573 (was 572) - Thread LEAK? 
-, OpenFileDescriptor=835 (was 835), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=430 (was 430), ProcessCount=171 (was 171), AvailableMemoryMB=4222 (was 4222) 2023-07-18 20:15:27,095 WARN [Listener at localhost/43545] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 20:15:27,111 INFO [Listener at localhost/43545] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573, OpenFileDescriptor=835, MaxFileDescriptor=60000, SystemLoadAverage=430, ProcessCount=171, AvailableMemoryMB=4222 2023-07-18 20:15:27,111 WARN [Listener at localhost/43545] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 20:15:27,111 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-18 20:15:27,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:27,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 20:15:27,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:27,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:27,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:27,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:27,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:27,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:27,125 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:27,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:27,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,127 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:27,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:27,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:27,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41751] to rsgroup master 2023-07-18 20:15:27,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:27,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58876 deadline: 1689712527133, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 2023-07-18 20:15:27,133 WARN [Listener at localhost/43545] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 20:15:27,135 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:27,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,136 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33727, jenkins-hbase4.apache.org:35345, jenkins-hbase4.apache.org:38655, jenkins-hbase4.apache.org:43051], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:27,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:27,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:27,136 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-18 20:15:27,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-18 20:15:27,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 20:15:27,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:27,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 20:15:27,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:27,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 20:15:27,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-18 20:15:27,150 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 20:15:27,153 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:15:27,155 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 8 msec 2023-07-18 20:15:27,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 20:15:27,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 20:15:27,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:27,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:58876 deadline: 1689712527251, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-18 20:15:27,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 20:15:27,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-18 20:15:27,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 20:15:27,275 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 20:15:27,277 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 15 msec 2023-07-18 20:15:27,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 20:15:27,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-18 20:15:27,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 20:15:27,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 20:15:27,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:27,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 20:15:27,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:27,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-18 20:15:27,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 20:15:27,389 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 20:15:27,391 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 20:15:27,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 20:15:27,392 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 20:15:27,393 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 20:15:27,394 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 20:15:27,394 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 20:15:27,396 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 20:15:27,397 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-18 20:15:27,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 20:15:27,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 20:15:27,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 20:15:27,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:27,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 20:15:27,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:27,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:27,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:58876 deadline: 1689711387502, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-18 20:15:27,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:27,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
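The entries above trace the testNamespaceConstraint flow: group Group_foo is created, a namespace pinned to it via the hbase.rsgroup.name property blocks removeRSGroup with "RSGroup Group_foo is referenced by namespace: Group_foo", and a namespace that names a nonexistent group is rejected in preCreateNamespace with "Region server group foo does not exist." What follows is a minimal client-side sketch of that flow, not the test's exact code; it assumes the branch-2.4 RSGroupAdminClient methods visible in the stack traces (addRSGroup, removeRSGroup) and the standard Admin namespace API, and the class name is illustrative.

// Minimal sketch (not the test's exact code) of the client calls behind the two
// ConstraintExceptions logged above. Assumes the branch-2.4 RSGroupAdminClient
// API seen in the stack traces; error messages are quoted from the log.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class NamespaceConstraintSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Create a group and a namespace that is pinned to it.
      rsGroupAdmin.addRSGroup("Group_foo");
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo").build());

      // While the namespace references the group, removal is rejected:
      // "RSGroup Group_foo is referenced by namespace: Group_foo"
      try {
        rsGroupAdmin.removeRSGroup("Group_foo");
      } catch (ConstraintException expected) {
        // expected while the namespace still points at Group_foo
      }

      // Once the namespace is gone, the group can be removed.
      admin.deleteNamespace("Group_foo");
      rsGroupAdmin.removeRSGroup("Group_foo");

      // Referencing a group that was never created is rejected in
      // preCreateNamespace: "Region server group foo does not exist."
      try {
        admin.createNamespace(NamespaceDescriptor.create("Group_foo")
            .addConfiguration("hbase.rsgroup.name", "foo").build());
      } catch (ConstraintException expected) {
        // expected: group "foo" does not exist
      }
    }
  }
}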
2023-07-18 20:15:27,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:27,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:27,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:27,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-18 20:15:27,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:27,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 20:15:27,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:27,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 20:15:27,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
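The cleanup that continues below repeatedly tries to move the active master's address (jenkins-hbase4.apache.org:41751) into a "master" group, and RSGroupAdminServer.moveServers rejects it because the master is not an online region server, producing the "Server ... is either offline or it does not exist" ConstraintException that the test only logs as "Got this on setup, FYI". A minimal sketch of that call, assuming the moveServers(Set<Address>, String) signature shown in the stack traces; the hostname and port are taken from the log and the class and method names are illustrative.

// Sketch of the cleanup call behind the repeated
// "Server ... is either offline or it does not exist" ConstraintException below.
// Assumes moveServers(Set<Address>, String) as shown in the stack traces.
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterToGroupSketch {
  static void restoreMasterGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("master");
    try {
      // The master's address is not registered as a live region server,
      // so RSGroupAdminServer.moveServers refuses the move.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41751)),
          "master");
    } catch (ConstraintException e) {
      // Expected here; the test logs it as a warning and continues.
    }
  }
}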
2023-07-18 20:15:27,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 20:15:27,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 20:15:27,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 20:15:27,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 20:15:27,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 20:15:27,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 20:15:27,521 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 20:15:27,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 20:15:27,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 20:15:27,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 20:15:27,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 20:15:27,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 20:15:27,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41751] to rsgroup master 2023-07-18 20:15:27,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 20:15:27,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58876 deadline: 1689712527529, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 2023-07-18 20:15:27,530 WARN [Listener at localhost/43545] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41751 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 20:15:27,532 INFO [Listener at localhost/43545] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 20:15:27,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 20:15:27,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 20:15:27,533 INFO [Listener at localhost/43545] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33727, jenkins-hbase4.apache.org:35345, jenkins-hbase4.apache.org:38655, jenkins-hbase4.apache.org:43051], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 20:15:27,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 20:15:27,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41751] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 20:15:27,553 INFO [Listener at localhost/43545] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573 (was 573), OpenFileDescriptor=835 (was 835), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=430 (was 430), ProcessCount=171 (was 171), AvailableMemoryMB=4220 (was 4222) 2023-07-18 20:15:27,553 WARN [Listener at localhost/43545] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 20:15:27,553 INFO [Listener at localhost/43545] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 20:15:27,553 INFO [Listener at localhost/43545] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 20:15:27,553 DEBUG [Listener at localhost/43545] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x72676dde to 127.0.0.1:57108 2023-07-18 20:15:27,553 DEBUG [Listener at localhost/43545] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:27,553 DEBUG [Listener at localhost/43545] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 
20:15:27,554 DEBUG [Listener at localhost/43545] util.JVMClusterUtil(257): Found active master hash=1214557593, stopped=false 2023-07-18 20:15:27,554 DEBUG [Listener at localhost/43545] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 20:15:27,554 DEBUG [Listener at localhost/43545] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 20:15:27,554 INFO [Listener at localhost/43545] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41751,1689711323381 2023-07-18 20:15:27,556 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:27,556 INFO [Listener at localhost/43545] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 20:15:27,556 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:27,556 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:27,556 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:27,556 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:27,556 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 20:15:27,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:27,557 DEBUG [Listener at localhost/43545] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5f5b1a7a to 127.0.0.1:57108 2023-07-18 20:15:27,557 DEBUG [Listener at localhost/43545] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:27,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:27,558 INFO [Listener at localhost/43545] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43051,1689711323555' ***** 2023-07-18 20:15:27,558 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:27,558 INFO [Listener at localhost/43545] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 20:15:27,558 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:27,558 INFO [Listener at localhost/43545] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33727,1689711323717' ***** 2023-07-18 20:15:27,558 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:27,558 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 20:15:27,558 INFO [Listener at localhost/43545] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 20:15:27,559 INFO [Listener at localhost/43545] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38655,1689711323882' ***** 2023-07-18 20:15:27,559 INFO [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:27,559 INFO [Listener at localhost/43545] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 20:15:27,559 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 20:15:27,562 INFO [Listener at localhost/43545] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35345,1689711325375' ***** 2023-07-18 20:15:27,562 INFO [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:27,562 INFO [Listener at localhost/43545] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 20:15:27,562 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:27,563 INFO [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:27,566 INFO [RS:1;jenkins-hbase4:33727] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1ed60418{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:27,566 INFO [RS:0;jenkins-hbase4:43051] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3b5c9b44{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:27,567 INFO [RS:3;jenkins-hbase4:35345] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6b2f8788{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:27,567 INFO [RS:1;jenkins-hbase4:33727] server.AbstractConnector(383): Stopped ServerConnector@4bbe91f1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:27,567 INFO [RS:2;jenkins-hbase4:38655] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4981e3b1{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 20:15:27,567 INFO [RS:1;jenkins-hbase4:33727] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 
20:15:27,568 INFO [RS:0;jenkins-hbase4:43051] server.AbstractConnector(383): Stopped ServerConnector@e1f8b2a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:27,568 INFO [RS:3;jenkins-hbase4:35345] server.AbstractConnector(383): Stopped ServerConnector@45f7964c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:27,568 INFO [RS:0;jenkins-hbase4:43051] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:27,568 INFO [RS:2;jenkins-hbase4:38655] server.AbstractConnector(383): Stopped ServerConnector@795d3fe0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:27,568 INFO [RS:3;jenkins-hbase4:35345] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:27,569 INFO [RS:2;jenkins-hbase4:38655] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:27,568 INFO [RS:1;jenkins-hbase4:33727] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@35af6de7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:27,568 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:27,570 INFO [RS:3;jenkins-hbase4:35345] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@67ada82a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:27,569 INFO [RS:0;jenkins-hbase4:43051] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2034b22a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:27,571 INFO [RS:1;jenkins-hbase4:33727] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@72be832f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:27,572 INFO [RS:0;jenkins-hbase4:43051] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5d3b5daa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:27,570 INFO [RS:2;jenkins-hbase4:38655] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3ed2d6d6{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:27,572 INFO [RS:3;jenkins-hbase4:35345] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@523335fd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:27,574 INFO [RS:2;jenkins-hbase4:38655] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41854f52{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:27,574 INFO [RS:1;jenkins-hbase4:33727] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 
20:15:27,574 INFO [RS:1;jenkins-hbase4:33727] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 20:15:27,574 INFO [RS:1;jenkins-hbase4:33727] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 20:15:27,574 INFO [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(3305): Received CLOSE for 5aa0d940e5fa08b87c55fa08bfcc258c 2023-07-18 20:15:27,575 INFO [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(3305): Received CLOSE for 8e6a37927d7245565c64643b64c42447 2023-07-18 20:15:27,575 INFO [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:27,575 DEBUG [RS:1;jenkins-hbase4:33727] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x561f31ec to 127.0.0.1:57108 2023-07-18 20:15:27,575 DEBUG [RS:1;jenkins-hbase4:33727] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:27,575 INFO [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-18 20:15:27,575 DEBUG [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1478): Online Regions={5aa0d940e5fa08b87c55fa08bfcc258c=hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c., 8e6a37927d7245565c64643b64c42447=hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447.} 2023-07-18 20:15:27,575 DEBUG [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1504): Waiting on 5aa0d940e5fa08b87c55fa08bfcc258c, 8e6a37927d7245565c64643b64c42447 2023-07-18 20:15:27,575 INFO [RS:0;jenkins-hbase4:43051] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 20:15:27,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5aa0d940e5fa08b87c55fa08bfcc258c, disabling compactions & flushes 2023-07-18 20:15:27,575 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 20:15:27,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 2023-07-18 20:15:27,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 2023-07-18 20:15:27,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. after waiting 0 ms 2023-07-18 20:15:27,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 2023-07-18 20:15:27,576 INFO [RS:2;jenkins-hbase4:38655] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 20:15:27,576 INFO [RS:2;jenkins-hbase4:38655] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 20:15:27,576 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 20:15:27,576 INFO [RS:3;jenkins-hbase4:35345] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 20:15:27,576 INFO [RS:0;jenkins-hbase4:43051] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-18 20:15:27,576 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 20:15:27,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 5aa0d940e5fa08b87c55fa08bfcc258c 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-18 20:15:27,576 INFO [RS:2;jenkins-hbase4:38655] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 20:15:27,576 INFO [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:27,576 INFO [RS:0;jenkins-hbase4:43051] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 20:15:27,576 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:27,576 DEBUG [RS:0;jenkins-hbase4:43051] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5648724d to 127.0.0.1:57108 2023-07-18 20:15:27,576 DEBUG [RS:0;jenkins-hbase4:43051] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:27,576 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43051,1689711323555; all regions closed. 2023-07-18 20:15:27,576 INFO [RS:3;jenkins-hbase4:35345] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 20:15:27,577 INFO [RS:3;jenkins-hbase4:35345] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 20:15:27,577 INFO [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:27,577 DEBUG [RS:3;jenkins-hbase4:35345] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1d86a1bb to 127.0.0.1:57108 2023-07-18 20:15:27,576 DEBUG [RS:2;jenkins-hbase4:38655] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4c291667 to 127.0.0.1:57108 2023-07-18 20:15:27,577 DEBUG [RS:3;jenkins-hbase4:35345] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:27,577 DEBUG [RS:2;jenkins-hbase4:38655] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:27,577 INFO [RS:2;jenkins-hbase4:38655] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 20:15:27,577 INFO [RS:2;jenkins-hbase4:38655] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 20:15:27,577 INFO [RS:2;jenkins-hbase4:38655] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 20:15:27,577 INFO [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35345,1689711325375; all regions closed. 
2023-07-18 20:15:27,577 INFO [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 20:15:27,578 INFO [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 20:15:27,578 DEBUG [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-18 20:15:27,578 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 20:15:27,578 DEBUG [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-18 20:15:27,578 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 20:15:27,578 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 20:15:27,578 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 20:15:27,578 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 20:15:27,578 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-18 20:15:27,589 DEBUG [RS:0;jenkins-hbase4:43051] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/oldWALs 2023-07-18 20:15:27,589 INFO [RS:0;jenkins-hbase4:43051] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43051%2C1689711323555:(num 1689711324589) 2023-07-18 20:15:27,589 DEBUG [RS:0;jenkins-hbase4:43051] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:27,589 INFO [RS:0;jenkins-hbase4:43051] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:27,590 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:27,594 INFO [RS:0;jenkins-hbase4:43051] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 20:15:27,594 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 20:15:27,595 INFO [RS:0;jenkins-hbase4:43051] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 20:15:27,595 INFO [RS:0;jenkins-hbase4:43051] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-18 20:15:27,596 DEBUG [RS:3;jenkins-hbase4:35345] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/oldWALs 2023-07-18 20:15:27,596 INFO [RS:3;jenkins-hbase4:35345] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35345%2C1689711325375:(num 1689711325685) 2023-07-18 20:15:27,596 DEBUG [RS:3;jenkins-hbase4:35345] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:27,596 INFO [RS:3;jenkins-hbase4:35345] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:27,599 INFO [RS:3;jenkins-hbase4:35345] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:27,599 INFO [RS:3;jenkins-hbase4:35345] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 20:15:27,599 INFO [RS:3;jenkins-hbase4:35345] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 20:15:27,599 INFO [RS:3;jenkins-hbase4:35345] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 20:15:27,599 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 20:15:27,604 INFO [RS:3;jenkins-hbase4:35345] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35345 2023-07-18 20:15:27,608 INFO [RS:0;jenkins-hbase4:43051] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43051 2023-07-18 20:15:27,610 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:27,610 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:27,610 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:27,610 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:27,610 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:27,610 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:27,610 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:27,610 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43051,1689711323555 2023-07-18 20:15:27,610 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:27,611 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:27,611 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:27,611 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:27,611 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35345,1689711325375 2023-07-18 20:15:27,612 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43051,1689711323555] 2023-07-18 20:15:27,612 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43051,1689711323555; numProcessing=1 2023-07-18 20:15:27,614 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43051,1689711323555 already deleted, retry=false 2023-07-18 20:15:27,614 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43051,1689711323555 expired; onlineServers=3 2023-07-18 20:15:27,614 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35345,1689711325375] 2023-07-18 20:15:27,614 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35345,1689711325375; numProcessing=2 2023-07-18 20:15:27,615 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35345,1689711325375 already deleted, retry=false 2023-07-18 20:15:27,615 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35345,1689711325375 expired; onlineServers=2 2023-07-18 20:15:27,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), 
to=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c/.tmp/info/03578ac1647c403cbc93b90186b6c076 2023-07-18 20:15:27,621 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/.tmp/info/5b69d4fed10043b2b244fae03ccecab5 2023-07-18 20:15:27,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 03578ac1647c403cbc93b90186b6c076 2023-07-18 20:15:27,625 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c/.tmp/info/03578ac1647c403cbc93b90186b6c076 as hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c/info/03578ac1647c403cbc93b90186b6c076 2023-07-18 20:15:27,626 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5b69d4fed10043b2b244fae03ccecab5 2023-07-18 20:15:27,630 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 03578ac1647c403cbc93b90186b6c076 2023-07-18 20:15:27,630 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c/info/03578ac1647c403cbc93b90186b6c076, entries=3, sequenceid=9, filesize=5.0 K 2023-07-18 20:15:27,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 5aa0d940e5fa08b87c55fa08bfcc258c in 55ms, sequenceid=9, compaction requested=false 2023-07-18 20:15:27,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/namespace/5aa0d940e5fa08b87c55fa08bfcc258c/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-18 20:15:27,639 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/.tmp/rep_barrier/91401b96e48349f291a027471ce28efb 2023-07-18 20:15:27,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 2023-07-18 20:15:27,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5aa0d940e5fa08b87c55fa08bfcc258c: 2023-07-18 20:15:27,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689711324801.5aa0d940e5fa08b87c55fa08bfcc258c. 
2023-07-18 20:15:27,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8e6a37927d7245565c64643b64c42447, disabling compactions & flushes 2023-07-18 20:15:27,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 2023-07-18 20:15:27,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 2023-07-18 20:15:27,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. after waiting 0 ms 2023-07-18 20:15:27,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 2023-07-18 20:15:27,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8e6a37927d7245565c64643b64c42447 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-18 20:15:27,645 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 91401b96e48349f291a027471ce28efb 2023-07-18 20:15:27,651 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:27,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447/.tmp/m/baa21a9fbf594ddfb5cf2e108b4cf99e 2023-07-18 20:15:27,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for baa21a9fbf594ddfb5cf2e108b4cf99e 2023-07-18 20:15:27,658 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/.tmp/table/275fe318b07241f58abe8c6c239cfbc7 2023-07-18 20:15:27,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447/.tmp/m/baa21a9fbf594ddfb5cf2e108b4cf99e as hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447/m/baa21a9fbf594ddfb5cf2e108b4cf99e 2023-07-18 20:15:27,662 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:27,663 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 275fe318b07241f58abe8c6c239cfbc7 2023-07-18 20:15:27,664 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/.tmp/info/5b69d4fed10043b2b244fae03ccecab5 as 
hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/info/5b69d4fed10043b2b244fae03ccecab5 2023-07-18 20:15:27,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for baa21a9fbf594ddfb5cf2e108b4cf99e 2023-07-18 20:15:27,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447/m/baa21a9fbf594ddfb5cf2e108b4cf99e, entries=12, sequenceid=29, filesize=5.4 K 2023-07-18 20:15:27,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 8e6a37927d7245565c64643b64c42447 in 25ms, sequenceid=29, compaction requested=false 2023-07-18 20:15:27,673 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5b69d4fed10043b2b244fae03ccecab5 2023-07-18 20:15:27,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/rsgroup/8e6a37927d7245565c64643b64c42447/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-18 20:15:27,673 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/info/5b69d4fed10043b2b244fae03ccecab5, entries=22, sequenceid=26, filesize=7.3 K 2023-07-18 20:15:27,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 20:15:27,674 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 2023-07-18 20:15:27,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8e6a37927d7245565c64643b64c42447: 2023-07-18 20:15:27,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689711324889.8e6a37927d7245565c64643b64c42447. 
2023-07-18 20:15:27,674 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/.tmp/rep_barrier/91401b96e48349f291a027471ce28efb as hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/rep_barrier/91401b96e48349f291a027471ce28efb 2023-07-18 20:15:27,679 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 91401b96e48349f291a027471ce28efb 2023-07-18 20:15:27,679 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/rep_barrier/91401b96e48349f291a027471ce28efb, entries=1, sequenceid=26, filesize=4.9 K 2023-07-18 20:15:27,680 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/.tmp/table/275fe318b07241f58abe8c6c239cfbc7 as hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/table/275fe318b07241f58abe8c6c239cfbc7 2023-07-18 20:15:27,685 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 275fe318b07241f58abe8c6c239cfbc7 2023-07-18 20:15:27,685 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/table/275fe318b07241f58abe8c6c239cfbc7, entries=6, sequenceid=26, filesize=5.1 K 2023-07-18 20:15:27,686 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 108ms, sequenceid=26, compaction requested=false 2023-07-18 20:15:27,695 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-18 20:15:27,695 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 20:15:27,696 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 20:15:27,696 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 20:15:27,696 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 20:15:27,756 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:27,756 INFO [RS:3;jenkins-hbase4:35345] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35345,1689711325375; zookeeper connection closed. 
2023-07-18 20:15:27,756 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:35345-0x1017a132be3000b, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:27,756 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@22d53011] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@22d53011 2023-07-18 20:15:27,775 INFO [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33727,1689711323717; all regions closed. 2023-07-18 20:15:27,778 INFO [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38655,1689711323882; all regions closed. 2023-07-18 20:15:27,781 DEBUG [RS:1;jenkins-hbase4:33727] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/oldWALs 2023-07-18 20:15:27,781 INFO [RS:1;jenkins-hbase4:33727] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33727%2C1689711323717:(num 1689711324590) 2023-07-18 20:15:27,781 DEBUG [RS:1;jenkins-hbase4:33727] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:27,781 INFO [RS:1;jenkins-hbase4:33727] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:27,781 INFO [RS:1;jenkins-hbase4:33727] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:27,781 INFO [RS:1;jenkins-hbase4:33727] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 20:15:27,781 INFO [RS:1;jenkins-hbase4:33727] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 20:15:27,781 INFO [RS:1;jenkins-hbase4:33727] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 20:15:27,782 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 20:15:27,782 INFO [RS:1;jenkins-hbase4:33727] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33727 2023-07-18 20:15:27,784 DEBUG [RS:2;jenkins-hbase4:38655] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/oldWALs 2023-07-18 20:15:27,784 INFO [RS:2;jenkins-hbase4:38655] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38655%2C1689711323882.meta:.meta(num 1689711324735) 2023-07-18 20:15:27,784 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:27,784 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:27,784 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33727,1689711323717 2023-07-18 20:15:27,785 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33727,1689711323717] 2023-07-18 20:15:27,785 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33727,1689711323717; numProcessing=3 2023-07-18 20:15:27,788 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33727,1689711323717 already deleted, retry=false 2023-07-18 20:15:27,788 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33727,1689711323717 expired; onlineServers=1 2023-07-18 20:15:27,789 DEBUG [RS:2;jenkins-hbase4:38655] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/oldWALs 2023-07-18 20:15:27,789 INFO [RS:2;jenkins-hbase4:38655] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38655%2C1689711323882:(num 1689711324595) 2023-07-18 20:15:27,789 DEBUG [RS:2;jenkins-hbase4:38655] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:27,789 INFO [RS:2;jenkins-hbase4:38655] regionserver.LeaseManager(133): Closed leases 2023-07-18 20:15:27,790 INFO [RS:2;jenkins-hbase4:38655] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 20:15:27,790 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 20:15:27,791 INFO [RS:2;jenkins-hbase4:38655] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38655 2023-07-18 20:15:27,792 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38655,1689711323882 2023-07-18 20:15:27,792 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 20:15:27,793 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38655,1689711323882] 2023-07-18 20:15:27,794 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38655,1689711323882; numProcessing=4 2023-07-18 20:15:27,795 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38655,1689711323882 already deleted, retry=false 2023-07-18 20:15:27,795 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38655,1689711323882 expired; onlineServers=0 2023-07-18 20:15:27,795 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41751,1689711323381' ***** 2023-07-18 20:15:27,795 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 20:15:27,796 DEBUG [M:0;jenkins-hbase4:41751] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c5b304d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 20:15:27,796 INFO [M:0;jenkins-hbase4:41751] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 20:15:27,798 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 20:15:27,798 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 20:15:27,799 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 20:15:27,799 INFO [M:0;jenkins-hbase4:41751] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1a4e5625{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 20:15:27,799 INFO [M:0;jenkins-hbase4:41751] server.AbstractConnector(383): Stopped ServerConnector@2fb99c35{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:27,799 INFO [M:0;jenkins-hbase4:41751] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 20:15:27,800 INFO [M:0;jenkins-hbase4:41751] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@1f0d357d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 20:15:27,800 INFO [M:0;jenkins-hbase4:41751] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@27ece686{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/hadoop.log.dir/,STOPPED} 2023-07-18 20:15:27,801 INFO [M:0;jenkins-hbase4:41751] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41751,1689711323381 2023-07-18 20:15:27,801 INFO [M:0;jenkins-hbase4:41751] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41751,1689711323381; all regions closed. 2023-07-18 20:15:27,801 DEBUG [M:0;jenkins-hbase4:41751] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 20:15:27,801 INFO [M:0;jenkins-hbase4:41751] master.HMaster(1491): Stopping master jetty server 2023-07-18 20:15:27,801 INFO [M:0;jenkins-hbase4:41751] server.AbstractConnector(383): Stopped ServerConnector@26b927f9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 20:15:27,802 DEBUG [M:0;jenkins-hbase4:41751] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 20:15:27,802 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-18 20:15:27,802 DEBUG [M:0;jenkins-hbase4:41751] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 20:15:27,802 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689711324290] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689711324290,5,FailOnTimeoutGroup] 2023-07-18 20:15:27,802 INFO [M:0;jenkins-hbase4:41751] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 20:15:27,802 INFO [M:0;jenkins-hbase4:41751] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 20:15:27,802 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689711324290] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689711324290,5,FailOnTimeoutGroup] 2023-07-18 20:15:27,802 INFO [M:0;jenkins-hbase4:41751] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-18 20:15:27,802 DEBUG [M:0;jenkins-hbase4:41751] master.HMaster(1512): Stopping service threads 2023-07-18 20:15:27,802 INFO [M:0;jenkins-hbase4:41751] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 20:15:27,803 ERROR [M:0;jenkins-hbase4:41751] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-18 20:15:27,803 INFO [M:0;jenkins-hbase4:41751] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 20:15:27,803 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-18 20:15:27,803 DEBUG [M:0;jenkins-hbase4:41751] zookeeper.ZKUtil(398): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 20:15:27,803 WARN [M:0;jenkins-hbase4:41751] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 20:15:27,803 INFO [M:0;jenkins-hbase4:41751] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 20:15:27,803 INFO [M:0;jenkins-hbase4:41751] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 20:15:27,803 DEBUG [M:0;jenkins-hbase4:41751] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 20:15:27,803 INFO [M:0;jenkins-hbase4:41751] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:27,803 DEBUG [M:0;jenkins-hbase4:41751] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:27,803 DEBUG [M:0;jenkins-hbase4:41751] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 20:15:27,803 DEBUG [M:0;jenkins-hbase4:41751] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 20:15:27,804 INFO [M:0;jenkins-hbase4:41751] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.21 KB heapSize=90.66 KB 2023-07-18 20:15:27,814 INFO [M:0;jenkins-hbase4:41751] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.21 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e03ca2f033ae4660a641e4cb29bccef5 2023-07-18 20:15:27,819 DEBUG [M:0;jenkins-hbase4:41751] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e03ca2f033ae4660a641e4cb29bccef5 as hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e03ca2f033ae4660a641e4cb29bccef5 2023-07-18 20:15:27,823 INFO [M:0;jenkins-hbase4:41751] regionserver.HStore(1080): Added hdfs://localhost:40885/user/jenkins/test-data/c51f2999-17ef-56d5-1c5a-78542659e499/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e03ca2f033ae4660a641e4cb29bccef5, entries=22, sequenceid=175, filesize=11.1 K 2023-07-18 20:15:27,824 INFO [M:0;jenkins-hbase4:41751] regionserver.HRegion(2948): Finished flush of dataSize ~76.21 KB/78038, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=175, compaction requested=false 2023-07-18 20:15:27,826 INFO [M:0;jenkins-hbase4:41751] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 20:15:27,826 DEBUG [M:0;jenkins-hbase4:41751] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 20:15:27,831 INFO [M:0;jenkins-hbase4:41751] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 20:15:27,831 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 20:15:27,832 INFO [M:0;jenkins-hbase4:41751] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41751 2023-07-18 20:15:27,833 DEBUG [M:0;jenkins-hbase4:41751] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41751,1689711323381 already deleted, retry=false 2023-07-18 20:15:27,856 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:27,856 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43051,1689711323555; zookeeper connection closed. 2023-07-18 20:15:27,856 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1017a132be30001, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:27,856 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@642b672a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@642b672a 2023-07-18 20:15:28,458 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:28,458 INFO [M:0;jenkins-hbase4:41751] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41751,1689711323381; zookeeper connection closed. 2023-07-18 20:15:28,458 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): master:41751-0x1017a132be30000, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:28,558 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:28,558 INFO [RS:2;jenkins-hbase4:38655] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38655,1689711323882; zookeeper connection closed. 2023-07-18 20:15:28,558 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:38655-0x1017a132be30003, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:28,559 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1d262ed] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1d262ed 2023-07-18 20:15:28,658 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:28,658 INFO [RS:1;jenkins-hbase4:33727] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33727,1689711323717; zookeeper connection closed. 
2023-07-18 20:15:28,658 DEBUG [Listener at localhost/43545-EventThread] zookeeper.ZKWatcher(600): regionserver:33727-0x1017a132be30002, quorum=127.0.0.1:57108, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 20:15:28,659 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@29ba861f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@29ba861f 2023-07-18 20:15:28,659 INFO [Listener at localhost/43545] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-18 20:15:28,659 WARN [Listener at localhost/43545] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 20:15:28,662 INFO [Listener at localhost/43545] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:28,765 WARN [BP-11363302-172.31.14.131-1689711322671 heartbeating to localhost/127.0.0.1:40885] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 20:15:28,765 WARN [BP-11363302-172.31.14.131-1689711322671 heartbeating to localhost/127.0.0.1:40885] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-11363302-172.31.14.131-1689711322671 (Datanode Uuid 5d207432-a59d-4781-a846-b4ceb46f6f3b) service to localhost/127.0.0.1:40885 2023-07-18 20:15:28,766 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data5/current/BP-11363302-172.31.14.131-1689711322671] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:28,766 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data6/current/BP-11363302-172.31.14.131-1689711322671] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:28,767 WARN [Listener at localhost/43545] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 20:15:28,770 INFO [Listener at localhost/43545] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:28,872 WARN [BP-11363302-172.31.14.131-1689711322671 heartbeating to localhost/127.0.0.1:40885] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 20:15:28,872 WARN [BP-11363302-172.31.14.131-1689711322671 heartbeating to localhost/127.0.0.1:40885] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-11363302-172.31.14.131-1689711322671 (Datanode Uuid 9906c981-572b-4687-b3ee-323f61361fcc) service to localhost/127.0.0.1:40885 2023-07-18 20:15:28,873 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data3/current/BP-11363302-172.31.14.131-1689711322671] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:28,874 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data4/current/BP-11363302-172.31.14.131-1689711322671] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:28,875 WARN [Listener at localhost/43545] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 20:15:28,877 INFO [Listener at localhost/43545] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:28,979 WARN [BP-11363302-172.31.14.131-1689711322671 heartbeating to localhost/127.0.0.1:40885] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 20:15:28,979 WARN [BP-11363302-172.31.14.131-1689711322671 heartbeating to localhost/127.0.0.1:40885] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-11363302-172.31.14.131-1689711322671 (Datanode Uuid 42e721a5-9bc2-410c-b8a1-978b25a19f92) service to localhost/127.0.0.1:40885 2023-07-18 20:15:28,980 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data1/current/BP-11363302-172.31.14.131-1689711322671] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:28,981 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3c47ed02-c71f-b7b7-f265-5246d5bb77a5/cluster_1e8706d7-346f-a544-b912-33df0f97caff/dfs/data/data2/current/BP-11363302-172.31.14.131-1689711322671] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 20:15:28,991 INFO [Listener at localhost/43545] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 20:15:29,105 INFO [Listener at localhost/43545] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 20:15:29,130 INFO [Listener at localhost/43545] hbase.HBaseTestingUtility(1293): Minicluster is down