2023-07-13 22:15:34,137 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad 2023-07-13 22:15:34,154 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-13 22:15:34,173 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-13 22:15:34,174 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3, deleteOnExit=true 2023-07-13 22:15:34,174 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-13 22:15:34,174 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/test.cache.data in system properties and HBase conf 2023-07-13 22:15:34,175 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.tmp.dir in system properties and HBase conf 2023-07-13 22:15:34,175 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir in system properties and HBase conf 2023-07-13 22:15:34,176 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-13 22:15:34,176 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-13 22:15:34,177 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-13 22:15:34,295 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-13 22:15:34,721 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-13 22:15:34,727 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-13 22:15:34,728 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-13 22:15:34,729 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-13 22:15:34,730 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 22:15:34,730 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-13 22:15:34,731 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-13 22:15:34,731 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 22:15:34,732 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 22:15:34,732 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-13 22:15:34,733 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/nfs.dump.dir in system properties and HBase conf 2023-07-13 22:15:34,733 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/java.io.tmpdir in system properties and HBase conf 2023-07-13 22:15:34,734 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 22:15:34,734 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-13 22:15:34,735 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-13 22:15:35,331 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 22:15:35,335 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 22:15:35,625 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-13 22:15:35,801 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-13 22:15:35,814 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:15:35,845 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:15:35,875 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/java.io.tmpdir/Jetty_localhost_33457_hdfs____.4fk82o/webapp 2023-07-13 22:15:36,002 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33457 2023-07-13 22:15:36,040 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 22:15:36,041 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 22:15:36,444 WARN [Listener at localhost/42191] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 22:15:36,509 WARN [Listener at localhost/42191] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 22:15:36,533 WARN [Listener at localhost/42191] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:15:36,540 INFO [Listener at localhost/42191] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:15:36,555 INFO [Listener at localhost/42191] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/java.io.tmpdir/Jetty_localhost_44143_datanode____rbtk9j/webapp 2023-07-13 22:15:36,688 INFO [Listener at localhost/42191] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44143 2023-07-13 22:15:37,205 WARN [Listener at localhost/45649] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 22:15:37,229 WARN [Listener at localhost/45649] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 22:15:37,233 WARN [Listener at localhost/45649] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:15:37,236 INFO [Listener at localhost/45649] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:15:37,247 INFO [Listener at localhost/45649] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/java.io.tmpdir/Jetty_localhost_45717_datanode____w38dwj/webapp 2023-07-13 22:15:37,352 INFO [Listener at localhost/45649] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45717 2023-07-13 22:15:37,360 WARN [Listener at localhost/36819] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 22:15:37,370 WARN [Listener at localhost/36819] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 22:15:37,373 WARN [Listener at localhost/36819] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:15:37,374 INFO [Listener at localhost/36819] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:15:37,378 INFO [Listener at localhost/36819] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/java.io.tmpdir/Jetty_localhost_39229_datanode____.oq97eu/webapp 2023-07-13 22:15:37,497 INFO [Listener at localhost/36819] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39229 2023-07-13 22:15:37,511 WARN [Listener at localhost/39613] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 22:15:37,997 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3ac6bf280f2c2150: Processing first storage report for DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4 from datanode 082a6246-a47b-43dc-8198-7d1fbd11fc69 2023-07-13 22:15:37,998 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3ac6bf280f2c2150: from storage DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4 node DatanodeRegistration(127.0.0.1:35707, datanodeUuid=082a6246-a47b-43dc-8198-7d1fbd11fc69, infoPort=33439, 
infoSecurePort=0, ipcPort=39613, storageInfo=lv=-57;cid=testClusterID;nsid=320851164;c=1689286535402), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 22:15:37,998 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xae5756dd50ae13e7: Processing first storage report for DS-376770af-b2f3-4ff0-acd7-139c06bd622e from datanode 74606711-abfc-42f5-81ca-22688d879c43 2023-07-13 22:15:37,998 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xae5756dd50ae13e7: from storage DS-376770af-b2f3-4ff0-acd7-139c06bd622e node DatanodeRegistration(127.0.0.1:43751, datanodeUuid=74606711-abfc-42f5-81ca-22688d879c43, infoPort=35205, infoSecurePort=0, ipcPort=36819, storageInfo=lv=-57;cid=testClusterID;nsid=320851164;c=1689286535402), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:15:37,998 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3fe1817e7bfad223: Processing first storage report for DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836 from datanode 03f9678b-7844-44a6-b9df-b4e10718e2b5 2023-07-13 22:15:37,999 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3fe1817e7bfad223: from storage DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836 node DatanodeRegistration(127.0.0.1:46097, datanodeUuid=03f9678b-7844-44a6-b9df-b4e10718e2b5, infoPort=37521, infoSecurePort=0, ipcPort=45649, storageInfo=lv=-57;cid=testClusterID;nsid=320851164;c=1689286535402), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:15:37,999 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3ac6bf280f2c2150: Processing first storage report for DS-fc5b4a50-4218-46ed-acb2-e3561b9ee2a0 from datanode 082a6246-a47b-43dc-8198-7d1fbd11fc69 2023-07-13 22:15:37,999 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3ac6bf280f2c2150: from storage DS-fc5b4a50-4218-46ed-acb2-e3561b9ee2a0 node DatanodeRegistration(127.0.0.1:35707, datanodeUuid=082a6246-a47b-43dc-8198-7d1fbd11fc69, infoPort=33439, infoSecurePort=0, ipcPort=39613, storageInfo=lv=-57;cid=testClusterID;nsid=320851164;c=1689286535402), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:15:37,999 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xae5756dd50ae13e7: Processing first storage report for DS-29a85811-b1ba-4aeb-9b57-f8fb824bd341 from datanode 74606711-abfc-42f5-81ca-22688d879c43 2023-07-13 22:15:37,999 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xae5756dd50ae13e7: from storage DS-29a85811-b1ba-4aeb-9b57-f8fb824bd341 node DatanodeRegistration(127.0.0.1:43751, datanodeUuid=74606711-abfc-42f5-81ca-22688d879c43, infoPort=35205, infoSecurePort=0, ipcPort=36819, storageInfo=lv=-57;cid=testClusterID;nsid=320851164;c=1689286535402), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:15:37,999 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3fe1817e7bfad223: Processing first storage report for DS-d6387d0c-acdd-4526-b9e0-5ca2086a0af9 from datanode 03f9678b-7844-44a6-b9df-b4e10718e2b5 2023-07-13 22:15:37,999 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3fe1817e7bfad223: from storage 
DS-d6387d0c-acdd-4526-b9e0-5ca2086a0af9 node DatanodeRegistration(127.0.0.1:46097, datanodeUuid=03f9678b-7844-44a6-b9df-b4e10718e2b5, infoPort=37521, infoSecurePort=0, ipcPort=45649, storageInfo=lv=-57;cid=testClusterID;nsid=320851164;c=1689286535402), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:15:38,023 DEBUG [Listener at localhost/39613] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad 2023-07-13 22:15:38,121 INFO [Listener at localhost/39613] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/zookeeper_0, clientPort=54493, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-13 22:15:38,142 INFO [Listener at localhost/39613] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54493 2023-07-13 22:15:38,150 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:38,153 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:38,810 INFO [Listener at localhost/39613] util.FSUtils(471): Created version file at hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c with version=8 2023-07-13 22:15:38,810 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/hbase-staging 2023-07-13 22:15:38,818 DEBUG [Listener at localhost/39613] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-13 22:15:38,819 DEBUG [Listener at localhost/39613] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-13 22:15:38,819 DEBUG [Listener at localhost/39613] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-13 22:15:38,819 DEBUG [Listener at localhost/39613] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
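
For reference, a minimal sketch of the kind of JUnit setup that produces the startup sequence logged above, assuming the HBase 2.x testing API (HBaseTestingUtility / StartMiniClusterOption); the builder values simply mirror the logged option (1 master, 3 region servers, 3 data nodes, 1 ZooKeeper server) and this is not the actual TestRSGroupsAdmin1 setup code.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterStartupSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Mirror the logged option: 1 master, 3 region servers, 3 data nodes, 1 ZK server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // brings up DFS, ZooKeeper, the master and the region servers
    try {
      // ... test body would run against util.getConnection() here ...
    } finally {
      util.shutdownMiniCluster();    // tears everything down and removes the test data directory
    }
  }
}
```
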
2023-07-13 22:15:39,200 INFO [Listener at localhost/39613] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-13 22:15:39,720 INFO [Listener at localhost/39613] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:15:39,756 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:39,757 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:39,757 INFO [Listener at localhost/39613] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:15:39,757 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:39,757 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:15:39,919 INFO [Listener at localhost/39613] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:15:40,021 DEBUG [Listener at localhost/39613] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-13 22:15:40,119 INFO [Listener at localhost/39613] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34777 2023-07-13 22:15:40,130 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:40,132 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:40,153 INFO [Listener at localhost/39613] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34777 connecting to ZooKeeper ensemble=127.0.0.1:54493 2023-07-13 22:15:40,203 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:347770x0, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:15:40,207 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34777-0x10160c1767c0000 connected 2023-07-13 22:15:40,246 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:15:40,247 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:15:40,251 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:15:40,263 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34777 2023-07-13 22:15:40,263 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34777 2023-07-13 22:15:40,263 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34777 2023-07-13 22:15:40,267 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34777 2023-07-13 22:15:40,267 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34777 2023-07-13 22:15:40,302 INFO [Listener at localhost/39613] log.Log(170): Logging initialized @6951ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-13 22:15:40,435 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:15:40,436 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:15:40,436 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:15:40,438 INFO [Listener at localhost/39613] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 22:15:40,438 INFO [Listener at localhost/39613] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:15:40,438 INFO [Listener at localhost/39613] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:15:40,442 INFO [Listener at localhost/39613] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
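
As a hedged illustration only, the kind of client connection that would talk to this minicluster: the ZooKeeper ensemble 127.0.0.1:54493 comes from the MiniZooKeeperCluster and RecoverableZooKeeper lines above, while the property keys are the standard HBase client settings; nothing here is taken from the test itself.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MiniClusterClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    // clientPort=54493 is the value reported by MiniZooKeeperCluster in the log above.
    conf.setInt("hbase.zookeeper.property.clientPort", 54493);
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName[] tables = admin.listTableNames();
      System.out.println("tables visible on the minicluster: " + tables.length);
    }
  }
}
```
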
2023-07-13 22:15:40,502 INFO [Listener at localhost/39613] http.HttpServer(1146): Jetty bound to port 39373 2023-07-13 22:15:40,504 INFO [Listener at localhost/39613] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:15:40,546 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:40,549 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41cde975{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:15:40,550 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:40,550 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@30ab4443{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:15:40,719 INFO [Listener at localhost/39613] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:15:40,732 INFO [Listener at localhost/39613] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:15:40,732 INFO [Listener at localhost/39613] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:15:40,734 INFO [Listener at localhost/39613] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 22:15:40,742 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:40,771 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4ab75c3a{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/java.io.tmpdir/jetty-0_0_0_0-39373-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7366604862510656126/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 22:15:40,787 INFO [Listener at localhost/39613] server.AbstractConnector(333): Started ServerConnector@695b889{HTTP/1.1, (http/1.1)}{0.0.0.0:39373} 2023-07-13 22:15:40,787 INFO [Listener at localhost/39613] server.Server(415): Started @7436ms 2023-07-13 22:15:40,792 INFO [Listener at localhost/39613] master.HMaster(444): hbase.rootdir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c, hbase.cluster.distributed=false 2023-07-13 22:15:40,865 INFO [Listener at localhost/39613] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:15:40,865 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:40,866 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:40,866 INFO 
[Listener at localhost/39613] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:15:40,866 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:40,866 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:15:40,872 INFO [Listener at localhost/39613] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:15:40,874 INFO [Listener at localhost/39613] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39325 2023-07-13 22:15:40,877 INFO [Listener at localhost/39613] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 22:15:40,884 DEBUG [Listener at localhost/39613] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 22:15:40,885 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:40,887 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:40,888 INFO [Listener at localhost/39613] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39325 connecting to ZooKeeper ensemble=127.0.0.1:54493 2023-07-13 22:15:40,893 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:393250x0, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:15:40,894 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39325-0x10160c1767c0001 connected 2023-07-13 22:15:40,894 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:15:40,896 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:15:40,896 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:15:40,897 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39325 2023-07-13 22:15:40,897 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39325 2023-07-13 22:15:40,898 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39325 2023-07-13 22:15:40,899 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39325 2023-07-13 22:15:40,899 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39325 2023-07-13 22:15:40,901 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:15:40,901 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:15:40,901 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:15:40,902 INFO [Listener at localhost/39613] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 22:15:40,902 INFO [Listener at localhost/39613] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:15:40,903 INFO [Listener at localhost/39613] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:15:40,903 INFO [Listener at localhost/39613] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 22:15:40,904 INFO [Listener at localhost/39613] http.HttpServer(1146): Jetty bound to port 37439 2023-07-13 22:15:40,905 INFO [Listener at localhost/39613] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:15:40,907 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:40,907 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@327aeb51{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:15:40,908 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:40,908 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2aa51e1f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:15:41,032 INFO [Listener at localhost/39613] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:15:41,034 INFO [Listener at localhost/39613] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:15:41,034 INFO [Listener at localhost/39613] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:15:41,035 INFO [Listener at localhost/39613] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 22:15:41,036 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:41,040 INFO 
[Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@11ebf93c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/java.io.tmpdir/jetty-0_0_0_0-37439-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9122478129438207390/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:15:41,041 INFO [Listener at localhost/39613] server.AbstractConnector(333): Started ServerConnector@5bac19f3{HTTP/1.1, (http/1.1)}{0.0.0.0:37439} 2023-07-13 22:15:41,041 INFO [Listener at localhost/39613] server.Server(415): Started @7690ms 2023-07-13 22:15:41,054 INFO [Listener at localhost/39613] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:15:41,054 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:41,054 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:41,055 INFO [Listener at localhost/39613] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:15:41,055 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:41,055 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:15:41,055 INFO [Listener at localhost/39613] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:15:41,057 INFO [Listener at localhost/39613] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39109 2023-07-13 22:15:41,058 INFO [Listener at localhost/39613] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 22:15:41,059 DEBUG [Listener at localhost/39613] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 22:15:41,060 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:41,062 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:41,063 INFO [Listener at localhost/39613] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39109 connecting to ZooKeeper ensemble=127.0.0.1:54493 2023-07-13 22:15:41,067 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:391090x0, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 
22:15:41,068 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): regionserver:391090x0, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:15:41,069 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39109-0x10160c1767c0002 connected 2023-07-13 22:15:41,070 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:15:41,071 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:15:41,072 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39109 2023-07-13 22:15:41,074 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39109 2023-07-13 22:15:41,075 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39109 2023-07-13 22:15:41,075 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39109 2023-07-13 22:15:41,076 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39109 2023-07-13 22:15:41,078 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:15:41,078 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:15:41,079 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:15:41,079 INFO [Listener at localhost/39613] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 22:15:41,079 INFO [Listener at localhost/39613] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:15:41,079 INFO [Listener at localhost/39613] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:15:41,080 INFO [Listener at localhost/39613] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
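
Since the class under test is TestRSGroupsAdmin1 (named at the top of this log), a minimal sketch of the rsgroup admin calls such a test typically drives against the minicluster, assuming the branch-2 hbase-rsgroup API (RSGroupAdminClient, RSGroupInfo, Address); the group name "my_group" and the 127.0.0.1:39325 server address are made-up illustrative values, not taken from the test.

```java
import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupAdminSketch {
  static void exercise(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("my_group");   // create a new region server group
    // Move one region server (hypothetical host:port) from the default group into it.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("127.0.0.1", 39325)), "my_group");
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("my_group");
    System.out.println("servers now in my_group: " + info.getServers());
    // Move the servers back and drop the (now empty) group again.
    rsGroupAdmin.moveServers(info.getServers(), RSGroupInfo.DEFAULT_GROUP);
    rsGroupAdmin.removeRSGroup("my_group");
  }
}
```
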
2023-07-13 22:15:41,080 INFO [Listener at localhost/39613] http.HttpServer(1146): Jetty bound to port 39197 2023-07-13 22:15:41,080 INFO [Listener at localhost/39613] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:15:41,091 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:41,091 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@68e3f8ea{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:15:41,092 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:41,092 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5eb994c2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:15:41,226 INFO [Listener at localhost/39613] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:15:41,227 INFO [Listener at localhost/39613] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:15:41,227 INFO [Listener at localhost/39613] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:15:41,227 INFO [Listener at localhost/39613] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 22:15:41,228 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:41,229 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3757a44c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/java.io.tmpdir/jetty-0_0_0_0-39197-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2679584572150510112/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:15:41,230 INFO [Listener at localhost/39613] server.AbstractConnector(333): Started ServerConnector@23f51881{HTTP/1.1, (http/1.1)}{0.0.0.0:39197} 2023-07-13 22:15:41,231 INFO [Listener at localhost/39613] server.Server(415): Started @7879ms 2023-07-13 22:15:41,243 INFO [Listener at localhost/39613] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:15:41,243 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:41,243 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:41,243 INFO [Listener at localhost/39613] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:15:41,244 INFO 
[Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:41,244 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:15:41,244 INFO [Listener at localhost/39613] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:15:41,245 INFO [Listener at localhost/39613] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38543 2023-07-13 22:15:41,246 INFO [Listener at localhost/39613] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 22:15:41,247 DEBUG [Listener at localhost/39613] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 22:15:41,249 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:41,250 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:41,252 INFO [Listener at localhost/39613] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38543 connecting to ZooKeeper ensemble=127.0.0.1:54493 2023-07-13 22:15:41,257 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:385430x0, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:15:41,259 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): regionserver:385430x0, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:15:41,259 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38543-0x10160c1767c0003 connected 2023-07-13 22:15:41,260 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:15:41,261 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:15:41,262 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38543 2023-07-13 22:15:41,263 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38543 2023-07-13 22:15:41,263 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38543 2023-07-13 22:15:41,267 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38543 2023-07-13 22:15:41,267 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=38543 2023-07-13 22:15:41,269 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:15:41,269 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:15:41,270 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:15:41,270 INFO [Listener at localhost/39613] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 22:15:41,270 INFO [Listener at localhost/39613] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:15:41,271 INFO [Listener at localhost/39613] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:15:41,271 INFO [Listener at localhost/39613] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 22:15:41,272 INFO [Listener at localhost/39613] http.HttpServer(1146): Jetty bound to port 39897 2023-07-13 22:15:41,272 INFO [Listener at localhost/39613] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:15:41,281 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:41,281 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39b957cc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:15:41,282 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:41,282 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@48aea874{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:15:41,399 INFO [Listener at localhost/39613] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:15:41,400 INFO [Listener at localhost/39613] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:15:41,400 INFO [Listener at localhost/39613] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:15:41,400 INFO [Listener at localhost/39613] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 22:15:41,401 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:41,402 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@13a27dea{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/java.io.tmpdir/jetty-0_0_0_0-39897-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5340952768695644885/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:15:41,403 INFO [Listener at localhost/39613] server.AbstractConnector(333): Started ServerConnector@5f367c60{HTTP/1.1, (http/1.1)}{0.0.0.0:39897} 2023-07-13 22:15:41,404 INFO [Listener at localhost/39613] server.Server(415): Started @8052ms 2023-07-13 22:15:41,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:15:41,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7375106d{HTTP/1.1, (http/1.1)}{0.0.0.0:32917} 2023-07-13 22:15:41,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8063ms 2023-07-13 22:15:41,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34777,1689286538976 2023-07-13 22:15:41,424 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 22:15:41,425 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34777,1689286538976 2023-07-13 22:15:41,445 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:15:41,445 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:15:41,445 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:15:41,445 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:15:41,446 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:15:41,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 22:15:41,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34777,1689286538976 from backup master directory 2023-07-13 22:15:41,450 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 22:15:41,454 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34777,1689286538976 2023-07-13 22:15:41,454 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 22:15:41,455 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 22:15:41,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34777,1689286538976 2023-07-13 22:15:41,458 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-13 22:15:41,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-13 22:15:41,582 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/hbase.id with ID: 17faef0c-e578-4c44-a17c-0f33c27cbe4c 2023-07-13 22:15:41,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:41,643 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:15:41,703 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3c694acc to 127.0.0.1:54493 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:15:41,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d9104a6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:15:41,764 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:15:41,766 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 22:15:41,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-13 22:15:41,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-13 22:15:41,795 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.&lt;clinit&gt;(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.&lt;init&gt;(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.&lt;init&gt;(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-13 22:15:41,800 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.&lt;clinit&gt;(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.&lt;init&gt;(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.&lt;init&gt;(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-13 22:15:41,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:15:41,843 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/data/master/store-tmp 2023-07-13 22:15:41,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:41,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 22:15:41,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:15:41,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:15:41,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 22:15:41,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:15:41,886 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-13 22:15:41,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 22:15:41,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/WALs/jenkins-hbase4.apache.org,34777,1689286538976 2023-07-13 22:15:41,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34777%2C1689286538976, suffix=, logDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/WALs/jenkins-hbase4.apache.org,34777,1689286538976, archiveDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/oldWALs, maxLogs=10 2023-07-13 22:15:42,010 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK] 2023-07-13 22:15:42,010 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK] 2023-07-13 22:15:42,010 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK] 2023-07-13 22:15:42,020 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.&lt;clinit&gt;(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 22:15:42,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/WALs/jenkins-hbase4.apache.org,34777,1689286538976/jenkins-hbase4.apache.org%2C34777%2C1689286538976.1689286541931 2023-07-13 22:15:42,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK], DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK], DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK]] 2023-07-13 22:15:42,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:42,101 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:42,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:15:42,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:15:42,187 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:15:42,196 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 22:15:42,228 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 22:15:42,240 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-13 22:15:42,245 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:15:42,247 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:15:42,267 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:15:42,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:42,272 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11861559200, jitterRate=0.10469378530979156}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:42,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 22:15:42,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 22:15:42,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 22:15:42,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 22:15:42,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-13 22:15:42,312 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-13 22:15:42,355 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 43 msec 2023-07-13 22:15:42,356 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 22:15:42,383 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-13 22:15:42,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-13 22:15:42,399 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-13 22:15:42,406 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 22:15:42,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 22:15:42,416 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:15:42,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 22:15:42,418 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 22:15:42,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 22:15:42,443 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 22:15:42,443 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 22:15:42,443 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 22:15:42,443 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 22:15:42,443 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:15:42,444 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34777,1689286538976, sessionid=0x10160c1767c0000, setting cluster-up flag (Was=false) 2023-07-13 22:15:42,464 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:15:42,472 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 22:15:42,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34777,1689286538976 2023-07-13 22:15:42,481 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:15:42,487 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 22:15:42,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34777,1689286538976 2023-07-13 22:15:42,491 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.hbase-snapshot/.tmp 2023-07-13 22:15:42,521 INFO [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(951): ClusterId : 17faef0c-e578-4c44-a17c-0f33c27cbe4c 2023-07-13 22:15:42,529 INFO [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(951): ClusterId : 17faef0c-e578-4c44-a17c-0f33c27cbe4c 2023-07-13 22:15:42,534 INFO [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(951): ClusterId : 17faef0c-e578-4c44-a17c-0f33c27cbe4c 2023-07-13 22:15:42,537 DEBUG [RS:2;jenkins-hbase4:38543] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 22:15:42,537 DEBUG [RS:1;jenkins-hbase4:39109] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 22:15:42,537 DEBUG [RS:0;jenkins-hbase4:39325] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 22:15:42,550 DEBUG [RS:2;jenkins-hbase4:38543] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 22:15:42,550 DEBUG [RS:1;jenkins-hbase4:39109] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 22:15:42,550 DEBUG [RS:0;jenkins-hbase4:39325] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 22:15:42,550 DEBUG [RS:1;jenkins-hbase4:39109] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 22:15:42,550 DEBUG [RS:2;jenkins-hbase4:38543] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 22:15:42,550 DEBUG [RS:0;jenkins-hbase4:39325] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 22:15:42,556 DEBUG [RS:1;jenkins-hbase4:39109] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 22:15:42,557 DEBUG [RS:2;jenkins-hbase4:38543] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 22:15:42,560 DEBUG [RS:1;jenkins-hbase4:39109] zookeeper.ReadOnlyZKClient(139): Connect 0x5e632b08 to 127.0.0.1:54493 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:15:42,564 DEBUG [RS:0;jenkins-hbase4:39325] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot 
initialized 2023-07-13 22:15:42,579 DEBUG [RS:2;jenkins-hbase4:38543] zookeeper.ReadOnlyZKClient(139): Connect 0x626c4d03 to 127.0.0.1:54493 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:15:42,583 DEBUG [RS:0;jenkins-hbase4:39325] zookeeper.ReadOnlyZKClient(139): Connect 0x1426688c to 127.0.0.1:54493 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:15:42,609 DEBUG [RS:2;jenkins-hbase4:38543] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@387c0a54, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:15:42,610 DEBUG [RS:2;jenkins-hbase4:38543] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ac55ad4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:15:42,613 DEBUG [RS:1;jenkins-hbase4:39109] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@12b141df, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:15:42,613 DEBUG [RS:1;jenkins-hbase4:39109] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2660f13e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:15:42,615 DEBUG [RS:0;jenkins-hbase4:39325] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2702f66a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:15:42,615 DEBUG [RS:0;jenkins-hbase4:39325] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d828c2e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:15:42,643 DEBUG [RS:1;jenkins-hbase4:39109] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:39109 2023-07-13 22:15:42,646 DEBUG [RS:0;jenkins-hbase4:39325] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39325 2023-07-13 22:15:42,653 DEBUG [RS:2;jenkins-hbase4:38543] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:38543 2023-07-13 22:15:42,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 22:15:42,655 INFO [RS:1;jenkins-hbase4:39109] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 22:15:42,656 INFO [RS:1;jenkins-hbase4:39109] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 22:15:42,656 DEBUG [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(1022): 
About to register with Master. 2023-07-13 22:15:42,656 INFO [RS:0;jenkins-hbase4:39325] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 22:15:42,658 INFO [RS:0;jenkins-hbase4:39325] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 22:15:42,658 DEBUG [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 22:15:42,658 INFO [RS:2;jenkins-hbase4:38543] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 22:15:42,658 INFO [RS:2;jenkins-hbase4:38543] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 22:15:42,658 DEBUG [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 22:15:42,671 INFO [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34777,1689286538976 with isa=jenkins-hbase4.apache.org/172.31.14.131:38543, startcode=1689286541242 2023-07-13 22:15:42,671 INFO [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34777,1689286538976 with isa=jenkins-hbase4.apache.org/172.31.14.131:39325, startcode=1689286540864 2023-07-13 22:15:42,674 INFO [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34777,1689286538976 with isa=jenkins-hbase4.apache.org/172.31.14.131:39109, startcode=1689286541053 2023-07-13 22:15:42,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 22:15:42,684 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 22:15:42,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 22:15:42,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-13 22:15:42,697 DEBUG [RS:0;jenkins-hbase4:39325] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 22:15:42,697 DEBUG [RS:2;jenkins-hbase4:38543] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 22:15:42,697 DEBUG [RS:1;jenkins-hbase4:39109] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 22:15:42,779 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58041, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 22:15:42,779 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56967, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 22:15:42,779 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47263, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 22:15:42,795 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:42,803 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-13 22:15:42,805 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:42,807 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:42,834 DEBUG [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 22:15:42,835 DEBUG [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 22:15:42,835 WARN [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-13 22:15:42,834 DEBUG [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 22:15:42,835 WARN [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-13 22:15:42,835 WARN [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-13 22:15:42,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 22:15:42,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 22:15:42,855 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 22:15:42,855 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-13 22:15:42,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:15:42,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:15:42,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:15:42,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:15:42,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-13 22:15:42,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:42,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:15:42,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:42,859 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689286572859 2023-07-13 22:15:42,862 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 22:15:42,867 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 22:15:42,867 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 22:15:42,868 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-13 22:15:42,871 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 22:15:42,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 22:15:42,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 22:15:42,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 22:15:42,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 22:15:42,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:42,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 22:15:42,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 22:15:42,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 22:15:42,886 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 22:15:42,886 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 22:15:42,888 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689286542888,5,FailOnTimeoutGroup] 2023-07-13 22:15:42,890 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689286542889,5,FailOnTimeoutGroup] 2023-07-13 22:15:42,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:42,891 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-13 22:15:42,893 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:42,893 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-13 22:15:42,932 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 22:15:42,934 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 22:15:42,934 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c 2023-07-13 22:15:42,936 INFO [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34777,1689286538976 with isa=jenkins-hbase4.apache.org/172.31.14.131:39109, startcode=1689286541053 2023-07-13 22:15:42,936 INFO [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34777,1689286538976 with isa=jenkins-hbase4.apache.org/172.31.14.131:39325, startcode=1689286540864 2023-07-13 22:15:42,936 INFO [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34777,1689286538976 with isa=jenkins-hbase4.apache.org/172.31.14.131:38543, startcode=1689286541242 2023-07-13 22:15:42,943 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34777] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:42,945 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 22:15:42,945 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 22:15:42,950 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34777] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:42,951 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 22:15:42,951 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 22:15:42,951 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34777] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:42,951 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 22:15:42,951 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 22:15:42,953 DEBUG [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c 2023-07-13 22:15:42,953 DEBUG [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42191 2023-07-13 22:15:42,953 DEBUG [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39373 2023-07-13 22:15:42,960 DEBUG [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c 2023-07-13 22:15:42,960 DEBUG [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42191 2023-07-13 22:15:42,960 DEBUG [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39373 2023-07-13 22:15:42,960 DEBUG [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c 2023-07-13 22:15:42,961 DEBUG [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42191 2023-07-13 22:15:42,961 DEBUG [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39373 2023-07-13 22:15:42,965 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:15:42,973 DEBUG [RS:1;jenkins-hbase4:39109] zookeeper.ZKUtil(162): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on 
existing znode=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:42,973 DEBUG [RS:2;jenkins-hbase4:38543] zookeeper.ZKUtil(162): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:42,973 WARN [RS:1;jenkins-hbase4:39109] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 22:15:42,973 WARN [RS:2;jenkins-hbase4:38543] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 22:15:42,974 INFO [RS:1;jenkins-hbase4:39109] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:15:42,975 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39325,1689286540864] 2023-07-13 22:15:42,979 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39109,1689286541053] 2023-07-13 22:15:42,979 DEBUG [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:42,979 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38543,1689286541242] 2023-07-13 22:15:42,974 DEBUG [RS:0;jenkins-hbase4:39325] zookeeper.ZKUtil(162): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:42,974 INFO [RS:2;jenkins-hbase4:38543] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:15:42,983 WARN [RS:0;jenkins-hbase4:39325] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 22:15:42,984 DEBUG [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:42,984 INFO [RS:0;jenkins-hbase4:39325] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:15:42,984 DEBUG [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:42,996 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:43,006 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 22:15:43,009 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/info 2023-07-13 22:15:43,010 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 22:15:43,011 DEBUG [RS:1;jenkins-hbase4:39109] zookeeper.ZKUtil(162): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:43,011 DEBUG [RS:0;jenkins-hbase4:39325] zookeeper.ZKUtil(162): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:43,012 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:43,012 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 22:15:43,012 DEBUG [RS:2;jenkins-hbase4:38543] zookeeper.ZKUtil(162): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:43,012 DEBUG [RS:0;jenkins-hbase4:39325] zookeeper.ZKUtil(162): 
regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:43,012 DEBUG [RS:1;jenkins-hbase4:39109] zookeeper.ZKUtil(162): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:43,013 DEBUG [RS:2;jenkins-hbase4:38543] zookeeper.ZKUtil(162): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:43,013 DEBUG [RS:0;jenkins-hbase4:39325] zookeeper.ZKUtil(162): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:43,013 DEBUG [RS:1;jenkins-hbase4:39109] zookeeper.ZKUtil(162): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:43,014 DEBUG [RS:2;jenkins-hbase4:38543] zookeeper.ZKUtil(162): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:43,015 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/rep_barrier 2023-07-13 22:15:43,016 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 22:15:43,017 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:43,018 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 22:15:43,024 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/table 2023-07-13 22:15:43,025 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 22:15:43,027 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:43,028 DEBUG [RS:0;jenkins-hbase4:39325] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 22:15:43,028 DEBUG [RS:2;jenkins-hbase4:38543] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 22:15:43,030 DEBUG [RS:1;jenkins-hbase4:39109] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 22:15:43,030 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740 2023-07-13 22:15:43,047 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740 2023-07-13 22:15:43,052 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-13 22:15:43,053 INFO [RS:0;jenkins-hbase4:39325] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 22:15:43,053 INFO [RS:2;jenkins-hbase4:38543] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 22:15:43,054 INFO [RS:1;jenkins-hbase4:39109] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 22:15:43,055 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 22:15:43,062 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:43,063 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11952301760, jitterRate=0.11314484477043152}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 22:15:43,063 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 22:15:43,063 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 22:15:43,063 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 22:15:43,063 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 22:15:43,063 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 22:15:43,063 DEBUG [PEWorker-1] 
regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 22:15:43,064 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 22:15:43,064 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 22:15:43,073 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 22:15:43,074 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-13 22:15:43,081 INFO [RS:0;jenkins-hbase4:39325] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 22:15:43,083 INFO [RS:2;jenkins-hbase4:38543] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 22:15:43,083 INFO [RS:1;jenkins-hbase4:39109] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 22:15:43,086 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 22:15:43,088 INFO [RS:2;jenkins-hbase4:38543] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 22:15:43,088 INFO [RS:1;jenkins-hbase4:39109] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 22:15:43,088 INFO [RS:2;jenkins-hbase4:38543] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,088 INFO [RS:0;jenkins-hbase4:39325] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 22:15:43,089 INFO [RS:1;jenkins-hbase4:39109] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,089 INFO [RS:0;jenkins-hbase4:39325] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 22:15:43,090 INFO [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 22:15:43,098 INFO [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 22:15:43,098 INFO [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 22:15:43,102 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 22:15:43,106 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-13 22:15:43,110 INFO [RS:0;jenkins-hbase4:39325] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,110 INFO [RS:2;jenkins-hbase4:38543] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,110 INFO [RS:1;jenkins-hbase4:39109] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,110 DEBUG [RS:0;jenkins-hbase4:39325] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,111 DEBUG [RS:1;jenkins-hbase4:39109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,111 DEBUG [RS:0;jenkins-hbase4:39325] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,111 DEBUG [RS:1;jenkins-hbase4:39109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,111 DEBUG [RS:0;jenkins-hbase4:39325] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,111 DEBUG [RS:2;jenkins-hbase4:38543] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,111 DEBUG [RS:1;jenkins-hbase4:39109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,111 DEBUG [RS:2;jenkins-hbase4:38543] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,111 DEBUG [RS:1;jenkins-hbase4:39109] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,111 DEBUG [RS:2;jenkins-hbase4:38543] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,111 DEBUG 
[RS:1;jenkins-hbase4:39109] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,111 DEBUG [RS:2;jenkins-hbase4:38543] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,112 DEBUG [RS:1;jenkins-hbase4:39109] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:15:43,112 DEBUG [RS:2;jenkins-hbase4:38543] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,112 DEBUG [RS:1;jenkins-hbase4:39109] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,112 DEBUG [RS:2;jenkins-hbase4:38543] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:15:43,111 DEBUG [RS:0;jenkins-hbase4:39325] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,112 DEBUG [RS:2;jenkins-hbase4:38543] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,112 DEBUG [RS:0;jenkins-hbase4:39325] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,112 DEBUG [RS:2;jenkins-hbase4:38543] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,112 DEBUG [RS:0;jenkins-hbase4:39325] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:15:43,112 DEBUG [RS:2;jenkins-hbase4:38543] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,112 DEBUG [RS:0;jenkins-hbase4:39325] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,112 DEBUG [RS:2;jenkins-hbase4:38543] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,112 DEBUG [RS:0;jenkins-hbase4:39325] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,112 DEBUG [RS:1;jenkins-hbase4:39109] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,113 DEBUG [RS:0;jenkins-hbase4:39325] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,113 DEBUG [RS:1;jenkins-hbase4:39109] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,113 DEBUG [RS:0;jenkins-hbase4:39325] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,113 DEBUG [RS:1;jenkins-hbase4:39109] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:43,119 INFO [RS:0;jenkins-hbase4:39325] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,119 INFO [RS:0;jenkins-hbase4:39325] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,119 INFO [RS:1;jenkins-hbase4:39109] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,119 INFO [RS:1;jenkins-hbase4:39109] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,119 INFO [RS:1;jenkins-hbase4:39109] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,119 INFO [RS:0;jenkins-hbase4:39325] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,122 INFO [RS:2;jenkins-hbase4:38543] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,123 INFO [RS:2;jenkins-hbase4:38543] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,123 INFO [RS:2;jenkins-hbase4:38543] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,144 INFO [RS:1;jenkins-hbase4:39109] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 22:15:43,144 INFO [RS:2;jenkins-hbase4:38543] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 22:15:43,144 INFO [RS:0;jenkins-hbase4:39325] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 22:15:43,149 INFO [RS:2;jenkins-hbase4:38543] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38543,1689286541242-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,149 INFO [RS:1;jenkins-hbase4:39109] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39109,1689286541053-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,149 INFO [RS:0;jenkins-hbase4:39325] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39325,1689286540864-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 22:15:43,177 INFO [RS:1;jenkins-hbase4:39109] regionserver.Replication(203): jenkins-hbase4.apache.org,39109,1689286541053 started 2023-07-13 22:15:43,177 INFO [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39109,1689286541053, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39109, sessionid=0x10160c1767c0002 2023-07-13 22:15:43,177 DEBUG [RS:1;jenkins-hbase4:39109] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 22:15:43,177 DEBUG [RS:1;jenkins-hbase4:39109] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:43,177 DEBUG [RS:1;jenkins-hbase4:39109] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39109,1689286541053' 2023-07-13 22:15:43,177 DEBUG [RS:1;jenkins-hbase4:39109] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 22:15:43,178 INFO [RS:0;jenkins-hbase4:39325] regionserver.Replication(203): jenkins-hbase4.apache.org,39325,1689286540864 started 2023-07-13 22:15:43,178 INFO [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39325,1689286540864, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39325, sessionid=0x10160c1767c0001 2023-07-13 22:15:43,178 DEBUG [RS:0;jenkins-hbase4:39325] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 22:15:43,178 DEBUG [RS:0;jenkins-hbase4:39325] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:43,179 DEBUG [RS:0;jenkins-hbase4:39325] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39325,1689286540864' 2023-07-13 22:15:43,180 DEBUG [RS:0;jenkins-hbase4:39325] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 22:15:43,181 DEBUG [RS:0;jenkins-hbase4:39325] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 22:15:43,181 DEBUG [RS:0;jenkins-hbase4:39325] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 22:15:43,181 DEBUG [RS:0;jenkins-hbase4:39325] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 22:15:43,181 DEBUG [RS:0;jenkins-hbase4:39325] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:43,182 DEBUG [RS:0;jenkins-hbase4:39325] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39325,1689286540864' 2023-07-13 22:15:43,182 DEBUG [RS:0;jenkins-hbase4:39325] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 22:15:43,184 DEBUG [RS:0;jenkins-hbase4:39325] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 22:15:43,184 DEBUG [RS:1;jenkins-hbase4:39109] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 22:15:43,185 DEBUG [RS:1;jenkins-hbase4:39109] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc 
started 2023-07-13 22:15:43,185 DEBUG [RS:0;jenkins-hbase4:39325] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 22:15:43,185 DEBUG [RS:1;jenkins-hbase4:39109] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 22:15:43,185 INFO [RS:0;jenkins-hbase4:39325] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 22:15:43,189 INFO [RS:2;jenkins-hbase4:38543] regionserver.Replication(203): jenkins-hbase4.apache.org,38543,1689286541242 started 2023-07-13 22:15:43,185 DEBUG [RS:1;jenkins-hbase4:39109] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:43,189 INFO [RS:0;jenkins-hbase4:39325] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-13 22:15:43,189 INFO [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38543,1689286541242, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38543, sessionid=0x10160c1767c0003 2023-07-13 22:15:43,189 DEBUG [RS:1;jenkins-hbase4:39109] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39109,1689286541053' 2023-07-13 22:15:43,189 DEBUG [RS:1;jenkins-hbase4:39109] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 22:15:43,190 DEBUG [RS:2;jenkins-hbase4:38543] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 22:15:43,190 DEBUG [RS:2;jenkins-hbase4:38543] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:43,190 DEBUG [RS:2;jenkins-hbase4:38543] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38543,1689286541242' 2023-07-13 22:15:43,190 DEBUG [RS:2;jenkins-hbase4:38543] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 22:15:43,190 DEBUG [RS:1;jenkins-hbase4:39109] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 22:15:43,235 DEBUG [RS:2;jenkins-hbase4:38543] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 22:15:43,235 DEBUG [RS:1;jenkins-hbase4:39109] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 22:15:43,235 INFO [RS:1;jenkins-hbase4:39109] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 22:15:43,236 INFO [RS:1;jenkins-hbase4:39109] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 22:15:43,239 DEBUG [RS:2;jenkins-hbase4:38543] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 22:15:43,239 DEBUG [RS:2;jenkins-hbase4:38543] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 22:15:43,239 DEBUG [RS:2;jenkins-hbase4:38543] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:43,239 DEBUG [RS:2;jenkins-hbase4:38543] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38543,1689286541242' 2023-07-13 22:15:43,239 DEBUG [RS:2;jenkins-hbase4:38543] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 22:15:43,241 DEBUG [RS:2;jenkins-hbase4:38543] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 22:15:43,242 DEBUG [RS:2;jenkins-hbase4:38543] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 22:15:43,243 INFO [RS:2;jenkins-hbase4:38543] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 22:15:43,243 INFO [RS:2;jenkins-hbase4:38543] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-13 22:15:43,259 DEBUG [jenkins-hbase4:34777] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 22:15:43,274 DEBUG [jenkins-hbase4:34777] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:43,277 DEBUG [jenkins-hbase4:34777] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:43,277 DEBUG [jenkins-hbase4:34777] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:43,277 DEBUG [jenkins-hbase4:34777] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:15:43,277 DEBUG [jenkins-hbase4:34777] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:43,281 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39325,1689286540864, state=OPENING 2023-07-13 22:15:43,290 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-13 22:15:43,292 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:15:43,293 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 22:15:43,297 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:15:43,301 INFO [RS:0;jenkins-hbase4:39325] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39325%2C1689286540864, suffix=, logDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,39325,1689286540864, 
archiveDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs, maxLogs=32 2023-07-13 22:15:43,326 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK] 2023-07-13 22:15:43,326 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK] 2023-07-13 22:15:43,335 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK] 2023-07-13 22:15:43,341 INFO [RS:1;jenkins-hbase4:39109] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39109%2C1689286541053, suffix=, logDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,39109,1689286541053, archiveDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs, maxLogs=32 2023-07-13 22:15:43,347 INFO [RS:2;jenkins-hbase4:38543] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38543%2C1689286541242, suffix=, logDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,38543,1689286541242, archiveDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs, maxLogs=32 2023-07-13 22:15:43,352 INFO [RS:0;jenkins-hbase4:39325] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,39325,1689286540864/jenkins-hbase4.apache.org%2C39325%2C1689286540864.1689286543304 2023-07-13 22:15:43,352 DEBUG [RS:0;jenkins-hbase4:39325] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK], DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK], DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK]] 2023-07-13 22:15:43,386 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK] 2023-07-13 22:15:43,386 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK] 2023-07-13 22:15:43,387 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK] 2023-07-13 22:15:43,395 DEBUG [RS-EventLoopGroup-5-3] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK] 2023-07-13 22:15:43,396 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK] 2023-07-13 22:15:43,396 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK] 2023-07-13 22:15:43,407 INFO [RS:2;jenkins-hbase4:38543] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,38543,1689286541242/jenkins-hbase4.apache.org%2C38543%2C1689286541242.1689286543348 2023-07-13 22:15:43,407 INFO [RS:1;jenkins-hbase4:39109] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,39109,1689286541053/jenkins-hbase4.apache.org%2C39109%2C1689286541053.1689286543347 2023-07-13 22:15:43,407 DEBUG [RS:2;jenkins-hbase4:38543] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK], DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK], DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK]] 2023-07-13 22:15:43,407 DEBUG [RS:1;jenkins-hbase4:39109] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK], DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK], DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK]] 2023-07-13 22:15:43,451 WARN [ReadOnlyZKClient-127.0.0.1:54493@0x3c694acc] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-13 22:15:43,476 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:43,477 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34777,1689286538976] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:15:43,481 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:15:43,487 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40320, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:15:43,487 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40330, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:15:43,488 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39325] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:40320 deadline: 1689286603487, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is 
not online on jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:43,500 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 22:15:43,500 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:15:43,504 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39325%2C1689286540864.meta, suffix=.meta, logDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,39325,1689286540864, archiveDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs, maxLogs=32 2023-07-13 22:15:43,521 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK] 2023-07-13 22:15:43,523 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK] 2023-07-13 22:15:43,523 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK] 2023-07-13 22:15:43,527 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,39325,1689286540864/jenkins-hbase4.apache.org%2C39325%2C1689286540864.meta.1689286543505.meta 2023-07-13 22:15:43,528 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK], DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK], DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK]] 2023-07-13 22:15:43,528 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:43,530 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 22:15:43,532 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 22:15:43,534 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-13 22:15:43,540 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 22:15:43,540 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:43,540 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 22:15:43,540 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 22:15:43,543 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 22:15:43,545 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/info 2023-07-13 22:15:43,545 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/info 2023-07-13 22:15:43,545 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 22:15:43,546 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:43,546 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 22:15:43,548 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/rep_barrier 2023-07-13 22:15:43,548 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/rep_barrier 2023-07-13 22:15:43,548 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 22:15:43,549 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:43,549 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 22:15:43,551 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/table 2023-07-13 22:15:43,551 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/table 2023-07-13 22:15:43,551 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 22:15:43,552 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:43,554 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740 2023-07-13 22:15:43,557 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740 2023-07-13 22:15:43,561 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-13 22:15:43,563 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 22:15:43,565 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10358110080, jitterRate=-0.035325825214385986}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 22:15:43,565 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 22:15:43,578 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689286543473 2023-07-13 22:15:43,597 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 22:15:43,598 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 22:15:43,599 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39325,1689286540864, state=OPEN 2023-07-13 22:15:43,602 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 22:15:43,602 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 22:15:43,607 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-13 22:15:43,607 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39325,1689286540864 in 305 msec 2023-07-13 22:15:43,616 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-13 22:15:43,616 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 524 msec 2023-07-13 22:15:43,623 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 922 msec 2023-07-13 22:15:43,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689286543623, completionTime=-1 2023-07-13 22:15:43,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-13 22:15:43,623 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-13 22:15:43,674 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 22:15:43,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689286603675 2023-07-13 22:15:43,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689286663675 2023-07-13 22:15:43,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 51 msec 2023-07-13 22:15:43,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34777,1689286538976-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34777,1689286538976-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34777,1689286538976-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34777, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:43,704 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-13 22:15:43,714 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-13 22:15:43,715 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 22:15:43,727 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-13 22:15:43,730 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:15:43,733 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:15:43,750 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:43,753 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9 empty. 2023-07-13 22:15:43,754 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:43,754 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-13 22:15:43,790 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-13 22:15:43,792 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => b4a18ea8d84755db0befaf862f1698a9, NAME => 'hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:43,818 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:43,818 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing b4a18ea8d84755db0befaf862f1698a9, disabling compactions & flushes 2023-07-13 22:15:43,818 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 
2023-07-13 22:15:43,818 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:15:43,818 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. after waiting 0 ms 2023-07-13 22:15:43,819 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:15:43,819 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:15:43,819 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for b4a18ea8d84755db0befaf862f1698a9: 2023-07-13 22:15:43,824 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:15:43,841 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286543828"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286543828"}]},"ts":"1689286543828"} 2023-07-13 22:15:43,869 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 22:15:43,871 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:15:43,875 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286543871"}]},"ts":"1689286543871"} 2023-07-13 22:15:43,879 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-13 22:15:43,883 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:43,883 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:43,883 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:43,883 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:15:43,883 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:43,885 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b4a18ea8d84755db0befaf862f1698a9, ASSIGN}] 2023-07-13 22:15:43,887 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b4a18ea8d84755db0befaf862f1698a9, ASSIGN 2023-07-13 22:15:43,889 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b4a18ea8d84755db0befaf862f1698a9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38543,1689286541242; forceNewPlan=false, retain=false 2023-07-13 22:15:44,007 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34777,1689286538976] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:15:44,010 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34777,1689286538976] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-13 22:15:44,013 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:15:44,015 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:15:44,019 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:44,020 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab empty. 2023-07-13 22:15:44,020 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:44,020 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-13 22:15:44,040 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 22:15:44,042 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b4a18ea8d84755db0befaf862f1698a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:44,043 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286544042"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286544042"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286544042"}]},"ts":"1689286544042"} 2023-07-13 22:15:44,048 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-13 22:15:44,049 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure b4a18ea8d84755db0befaf862f1698a9, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:44,050 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => c215608c4a51d4b80df51dd910f81bab, NAME => 'hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:44,070 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:44,070 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing c215608c4a51d4b80df51dd910f81bab, disabling compactions & flushes 2023-07-13 22:15:44,070 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:44,070 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:44,070 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. after waiting 0 ms 2023-07-13 22:15:44,070 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:44,070 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 
2023-07-13 22:15:44,070 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for c215608c4a51d4b80df51dd910f81bab: 2023-07-13 22:15:44,073 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:15:44,075 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286544075"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286544075"}]},"ts":"1689286544075"} 2023-07-13 22:15:44,081 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 22:15:44,083 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:15:44,083 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286544083"}]},"ts":"1689286544083"} 2023-07-13 22:15:44,085 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-13 22:15:44,092 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:44,092 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:44,092 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:44,092 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:15:44,092 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:44,092 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c215608c4a51d4b80df51dd910f81bab, ASSIGN}] 2023-07-13 22:15:44,095 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c215608c4a51d4b80df51dd910f81bab, ASSIGN 2023-07-13 22:15:44,097 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=c215608c4a51d4b80df51dd910f81bab, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39325,1689286540864; forceNewPlan=false, retain=false 2023-07-13 22:15:44,204 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:44,204 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:15:44,207 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33752, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:15:44,214 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:15:44,215 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b4a18ea8d84755db0befaf862f1698a9, NAME => 'hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:44,215 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:44,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:44,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:44,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:44,218 INFO [StoreOpener-b4a18ea8d84755db0befaf862f1698a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:44,220 DEBUG [StoreOpener-b4a18ea8d84755db0befaf862f1698a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/info 2023-07-13 22:15:44,220 DEBUG [StoreOpener-b4a18ea8d84755db0befaf862f1698a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/info 2023-07-13 22:15:44,221 INFO [StoreOpener-b4a18ea8d84755db0befaf862f1698a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b4a18ea8d84755db0befaf862f1698a9 columnFamilyName info 2023-07-13 22:15:44,221 INFO [StoreOpener-b4a18ea8d84755db0befaf862f1698a9-1] regionserver.HStore(310): Store=b4a18ea8d84755db0befaf862f1698a9/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:44,223 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:44,224 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:44,228 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:44,231 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:44,232 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b4a18ea8d84755db0befaf862f1698a9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9986940960, jitterRate=-0.06989364326000214}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:44,232 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b4a18ea8d84755db0befaf862f1698a9: 2023-07-13 22:15:44,234 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9., pid=7, masterSystemTime=1689286544204 2023-07-13 22:15:44,238 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:15:44,239 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:15:44,240 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b4a18ea8d84755db0befaf862f1698a9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:44,241 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286544240"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286544240"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286544240"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286544240"}]},"ts":"1689286544240"} 2023-07-13 22:15:44,247 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 22:15:44,248 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=c215608c4a51d4b80df51dd910f81bab, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:44,249 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286544248"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286544248"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286544248"}]},"ts":"1689286544248"} 2023-07-13 22:15:44,254 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure c215608c4a51d4b80df51dd910f81bab, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:15:44,254 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-13 22:15:44,254 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure b4a18ea8d84755db0befaf862f1698a9, server=jenkins-hbase4.apache.org,38543,1689286541242 in 196 msec 2023-07-13 22:15:44,259 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-13 22:15:44,260 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b4a18ea8d84755db0befaf862f1698a9, ASSIGN in 369 msec 2023-07-13 22:15:44,261 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:15:44,262 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286544262"}]},"ts":"1689286544262"} 2023-07-13 22:15:44,264 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-13 22:15:44,268 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:15:44,270 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 552 msec 2023-07-13 22:15:44,330 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-13 22:15:44,332 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:15:44,332 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:15:44,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:15:44,359 INFO 
[RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33760, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:15:44,373 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-13 22:15:44,388 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:15:44,395 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 30 msec 2023-07-13 22:15:44,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 22:15:44,410 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-13 22:15:44,410 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 22:15:44,413 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:44,414 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c215608c4a51d4b80df51dd910f81bab, NAME => 'hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:44,414 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 22:15:44,414 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. service=MultiRowMutationService 2023-07-13 22:15:44,415 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-13 22:15:44,415 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:44,415 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:44,415 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:44,415 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:44,417 INFO [StoreOpener-c215608c4a51d4b80df51dd910f81bab-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:44,419 DEBUG [StoreOpener-c215608c4a51d4b80df51dd910f81bab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/m 2023-07-13 22:15:44,419 DEBUG [StoreOpener-c215608c4a51d4b80df51dd910f81bab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/m 2023-07-13 22:15:44,420 INFO [StoreOpener-c215608c4a51d4b80df51dd910f81bab-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c215608c4a51d4b80df51dd910f81bab columnFamilyName m 2023-07-13 22:15:44,421 INFO [StoreOpener-c215608c4a51d4b80df51dd910f81bab-1] regionserver.HStore(310): Store=c215608c4a51d4b80df51dd910f81bab/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:44,423 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:44,424 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:44,428 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:44,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:44,432 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c215608c4a51d4b80df51dd910f81bab; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@65a9f0ec, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:44,432 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c215608c4a51d4b80df51dd910f81bab: 2023-07-13 22:15:44,433 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab., pid=9, masterSystemTime=1689286544409 2023-07-13 22:15:44,435 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:44,436 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:44,437 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=c215608c4a51d4b80df51dd910f81bab, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:44,437 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286544437"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286544437"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286544437"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286544437"}]},"ts":"1689286544437"} 2023-07-13 22:15:44,443 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-13 22:15:44,443 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure c215608c4a51d4b80df51dd910f81bab, server=jenkins-hbase4.apache.org,39325,1689286540864 in 186 msec 2023-07-13 22:15:44,447 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-13 22:15:44,449 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=c215608c4a51d4b80df51dd910f81bab, ASSIGN in 351 msec 2023-07-13 22:15:44,459 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:15:44,465 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 58 msec 2023-07-13 22:15:44,467 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:15:44,467 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286544467"}]},"ts":"1689286544467"} 2023-07-13 22:15:44,469 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-13 22:15:44,472 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:15:44,475 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 464 msec 2023-07-13 22:15:44,475 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 22:15:44,478 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 22:15:44,478 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.023sec 2023-07-13 22:15:44,480 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-13 22:15:44,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 22:15:44,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 22:15:44,483 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34777,1689286538976-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 22:15:44,484 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34777,1689286538976-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-13 22:15:44,492 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 22:15:44,517 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-13 22:15:44,517 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-13 22:15:44,542 DEBUG [Listener at localhost/39613] zookeeper.ReadOnlyZKClient(139): Connect 0x427575b1 to 127.0.0.1:54493 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:15:44,548 DEBUG [Listener at localhost/39613] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13bd793a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:15:44,563 DEBUG [hconnection-0x2414dac3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:15:44,576 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40332, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:15:44,582 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:15:44,582 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:44,585 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 22:15:44,586 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34777,1689286538976 2023-07-13 22:15:44,588 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:44,590 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 22:15:44,596 DEBUG [Listener at localhost/39613] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 22:15:44,600 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59834, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 22:15:44,615 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-13 22:15:44,615 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:15:44,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-13 22:15:44,621 DEBUG [Listener at localhost/39613] zookeeper.ReadOnlyZKClient(139): Connect 0x735e01b3 to 127.0.0.1:54493 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:15:44,628 DEBUG [Listener at localhost/39613] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5faade10, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:15:44,629 INFO [Listener at localhost/39613] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54493 2023-07-13 22:15:44,631 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:15:44,633 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10160c1767c000a connected 2023-07-13 22:15:44,668 INFO [Listener at localhost/39613] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=424, OpenFileDescriptor=677, MaxFileDescriptor=60000, SystemLoadAverage=364, ProcessCount=172, AvailableMemoryMB=5195 2023-07-13 22:15:44,671 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-13 22:15:44,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:44,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:44,745 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-13 22:15:44,760 INFO [Listener at localhost/39613] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:15:44,760 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:44,760 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:44,760 INFO [Listener at localhost/39613] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:15:44,760 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:15:44,761 INFO [Listener at localhost/39613] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:15:44,761 INFO [Listener at localhost/39613] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:15:44,765 INFO [Listener at localhost/39613] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43571 2023-07-13 22:15:44,766 INFO [Listener at localhost/39613] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 22:15:44,767 DEBUG [Listener at localhost/39613] mob.MobFileCache(120): 
MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 22:15:44,769 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:44,772 INFO [Listener at localhost/39613] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:15:44,775 INFO [Listener at localhost/39613] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43571 connecting to ZooKeeper ensemble=127.0.0.1:54493 2023-07-13 22:15:44,780 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:435710x0, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:15:44,782 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(162): regionserver:435710x0, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 22:15:44,782 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43571-0x10160c1767c000b connected 2023-07-13 22:15:44,783 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(162): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-13 22:15:44,784 DEBUG [Listener at localhost/39613] zookeeper.ZKUtil(164): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:15:44,785 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43571 2023-07-13 22:15:44,786 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43571 2023-07-13 22:15:44,786 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43571 2023-07-13 22:15:44,787 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43571 2023-07-13 22:15:44,787 DEBUG [Listener at localhost/39613] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43571 2023-07-13 22:15:44,789 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:15:44,789 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:15:44,789 INFO [Listener at localhost/39613] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:15:44,790 INFO [Listener at localhost/39613] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 22:15:44,790 INFO [Listener at localhost/39613] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 
22:15:44,790 INFO [Listener at localhost/39613] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:15:44,790 INFO [Listener at localhost/39613] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 22:15:44,791 INFO [Listener at localhost/39613] http.HttpServer(1146): Jetty bound to port 34961 2023-07-13 22:15:44,791 INFO [Listener at localhost/39613] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:15:44,795 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:44,796 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@65cc72bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:15:44,796 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:44,797 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@46fccd5b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:15:44,973 INFO [Listener at localhost/39613] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:15:44,974 INFO [Listener at localhost/39613] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:15:44,974 INFO [Listener at localhost/39613] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:15:44,975 INFO [Listener at localhost/39613] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 22:15:44,976 INFO [Listener at localhost/39613] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:15:44,978 INFO [Listener at localhost/39613] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6436c29b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/java.io.tmpdir/jetty-0_0_0_0-34961-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3882478872156340568/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:15:44,980 INFO [Listener at localhost/39613] server.AbstractConnector(333): Started ServerConnector@b9fe3db{HTTP/1.1, (http/1.1)}{0.0.0.0:34961} 2023-07-13 22:15:44,980 INFO [Listener at localhost/39613] server.Server(415): Started @11629ms 2023-07-13 22:15:44,987 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(951): ClusterId : 17faef0c-e578-4c44-a17c-0f33c27cbe4c 2023-07-13 22:15:44,987 DEBUG [RS:3;jenkins-hbase4:43571] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 22:15:44,991 DEBUG [RS:3;jenkins-hbase4:43571] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 
2023-07-13 22:15:44,991 DEBUG [RS:3;jenkins-hbase4:43571] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 22:15:44,993 DEBUG [RS:3;jenkins-hbase4:43571] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 22:15:44,995 DEBUG [RS:3;jenkins-hbase4:43571] zookeeper.ReadOnlyZKClient(139): Connect 0x6f7c8fc8 to 127.0.0.1:54493 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:15:45,031 DEBUG [RS:3;jenkins-hbase4:43571] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61bb7c57, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:15:45,031 DEBUG [RS:3;jenkins-hbase4:43571] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6584e4b8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:15:45,041 DEBUG [RS:3;jenkins-hbase4:43571] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:43571 2023-07-13 22:15:45,042 INFO [RS:3;jenkins-hbase4:43571] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 22:15:45,042 INFO [RS:3;jenkins-hbase4:43571] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 22:15:45,042 DEBUG [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 22:15:45,043 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34777,1689286538976 with isa=jenkins-hbase4.apache.org/172.31.14.131:43571, startcode=1689286544760 2023-07-13 22:15:45,043 DEBUG [RS:3;jenkins-hbase4:43571] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 22:15:45,050 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38057, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 22:15:45,050 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34777] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,051 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 22:15:45,051 DEBUG [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c 2023-07-13 22:15:45,051 DEBUG [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42191 2023-07-13 22:15:45,051 DEBUG [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39373 2023-07-13 22:15:45,058 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:15:45,058 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:15:45,058 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:15:45,058 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:15:45,059 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:45,059 DEBUG [RS:3;jenkins-hbase4:43571] zookeeper.ZKUtil(162): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,059 WARN [RS:3;jenkins-hbase4:43571] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 22:15:45,059 INFO [RS:3;jenkins-hbase4:43571] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:15:45,059 DEBUG [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,059 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43571,1689286544760] 2023-07-13 22:15:45,059 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 22:15:45,059 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:45,060 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:45,060 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:45,085 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34777,1689286538976] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-13 22:15:45,085 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,086 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,085 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,086 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:45,086 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:45,089 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:45,089 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:45,089 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:45,090 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:45,091 DEBUG [RS:3;jenkins-hbase4:43571] zookeeper.ZKUtil(162): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:45,092 DEBUG [RS:3;jenkins-hbase4:43571] zookeeper.ZKUtil(162): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,092 DEBUG [RS:3;jenkins-hbase4:43571] zookeeper.ZKUtil(162): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:45,093 DEBUG [RS:3;jenkins-hbase4:43571] zookeeper.ZKUtil(162): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:45,094 DEBUG [RS:3;jenkins-hbase4:43571] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 22:15:45,095 INFO [RS:3;jenkins-hbase4:43571] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 22:15:45,099 INFO [RS:3;jenkins-hbase4:43571] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 22:15:45,099 INFO [RS:3;jenkins-hbase4:43571] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 22:15:45,099 INFO [RS:3;jenkins-hbase4:43571] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:45,100 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 22:15:45,102 INFO [RS:3;jenkins-hbase4:43571] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 22:15:45,102 DEBUG [RS:3;jenkins-hbase4:43571] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:45,102 DEBUG [RS:3;jenkins-hbase4:43571] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:45,103 DEBUG [RS:3;jenkins-hbase4:43571] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:45,103 DEBUG [RS:3;jenkins-hbase4:43571] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:45,103 DEBUG [RS:3;jenkins-hbase4:43571] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:45,103 DEBUG [RS:3;jenkins-hbase4:43571] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:15:45,103 DEBUG [RS:3;jenkins-hbase4:43571] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:45,103 DEBUG [RS:3;jenkins-hbase4:43571] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:45,103 DEBUG [RS:3;jenkins-hbase4:43571] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:45,103 DEBUG [RS:3;jenkins-hbase4:43571] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:15:45,109 INFO [RS:3;jenkins-hbase4:43571] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:45,109 INFO [RS:3;jenkins-hbase4:43571] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:45,109 INFO [RS:3;jenkins-hbase4:43571] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 22:15:45,125 INFO [RS:3;jenkins-hbase4:43571] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 22:15:45,125 INFO [RS:3;jenkins-hbase4:43571] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43571,1689286544760-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 22:15:45,136 INFO [RS:3;jenkins-hbase4:43571] regionserver.Replication(203): jenkins-hbase4.apache.org,43571,1689286544760 started 2023-07-13 22:15:45,136 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43571,1689286544760, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43571, sessionid=0x10160c1767c000b 2023-07-13 22:15:45,136 DEBUG [RS:3;jenkins-hbase4:43571] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 22:15:45,136 DEBUG [RS:3;jenkins-hbase4:43571] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,136 DEBUG [RS:3;jenkins-hbase4:43571] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43571,1689286544760' 2023-07-13 22:15:45,136 DEBUG [RS:3;jenkins-hbase4:43571] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 22:15:45,137 DEBUG [RS:3;jenkins-hbase4:43571] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 22:15:45,138 DEBUG [RS:3;jenkins-hbase4:43571] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 22:15:45,138 DEBUG [RS:3;jenkins-hbase4:43571] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 22:15:45,138 DEBUG [RS:3;jenkins-hbase4:43571] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,138 DEBUG [RS:3;jenkins-hbase4:43571] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43571,1689286544760' 2023-07-13 22:15:45,138 DEBUG [RS:3;jenkins-hbase4:43571] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 22:15:45,138 DEBUG [RS:3;jenkins-hbase4:43571] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 22:15:45,139 DEBUG [RS:3;jenkins-hbase4:43571] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 22:15:45,139 INFO [RS:3;jenkins-hbase4:43571] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 22:15:45,139 INFO [RS:3;jenkins-hbase4:43571] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
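The fourth region server (RS:3 on port 43571) is now fully up: its executors and chores are running, the flush-table-proc and online-snapshot procedure members are registered, and quota support is disabled. The TestRSGroupsBase stack frames later in this log suggest the extra server is started from the mini cluster during per-method setup; a minimal sketch of that step, assuming an HBaseTestingUtility instance and a target server count (both names are placeholders, not taken from this log):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    class MiniClusterGrowSketch {
      // Bring the mini cluster up to the expected region server count before a test
      // method runs; the listener above reports "Updated with servers: 4" once RS:3 joins.
      static void ensureRegionServers(HBaseTestingUtility util, int numSlaves) throws Exception {
        MiniHBaseCluster cluster = util.getMiniHBaseCluster();
        while (cluster.getLiveRegionServerThreads().size() < numSlaves) {
          cluster.startRegionServer();   // produces a startup sequence like RS:3 above
        }
        util.waitFor(60000, () -> cluster.getLiveRegionServerThreads().size() >= numSlaves);
      }
    }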
2023-07-13 22:15:45,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:15:45,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:45,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:45,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:15:45,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:15:45,155 DEBUG [hconnection-0x122baacf-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:15:45,164 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40338, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:15:45,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:45,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:45,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:15:45,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:45,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:59834 deadline: 1689287745181, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
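The ConstraintException above, and the matching client-side WARN and stack trace that follow, come from an attempt to move the master's address (jenkins-hbase4.apache.org:34777) into a freshly added "master" rsgroup; RSGroupAdminServer rejects it because that address is not a registered region server, and the test logs the failure as informational. A hedged sketch of the calls involved, based on the RSGroupAdminClient methods visible in the trace (constructor and variable names are assumptions):

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    class MasterGroupSetupSketch {
      // Reproduces the "add rsgroup master" / "move servers [...:34777] to rsgroup master"
      // sequence logged above; the move is expected to fail with ConstraintException.
      static void addMasterGroup(Connection conn, Address masterAddress) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("master");
        try {
          // e.g. masterAddress = Address.fromParts("jenkins-hbase4.apache.org", 34777)
          rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
        } catch (ConstraintException expected) {
          // Matches the "Got this on setup, FYI" WARN below: the master is not an RS.
        }
      }
    }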
2023-07-13 22:15:45,183 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 22:15:45,186 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:45,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:45,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:45,188 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:15:45,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:15:45,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:45,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:15:45,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:45,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:45,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:45,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:45,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:45,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:45,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:15:45,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:45,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:45,222 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:38543] to rsgroup Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:45,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:45,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:45,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:45,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:45,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(238): Moving server region b4a18ea8d84755db0befaf862f1698a9, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:45,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=b4a18ea8d84755db0befaf862f1698a9, REOPEN/MOVE 2023-07-13 22:15:45,235 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=b4a18ea8d84755db0befaf862f1698a9, REOPEN/MOVE 2023-07-13 22:15:45,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 22:15:45,237 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=b4a18ea8d84755db0befaf862f1698a9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:45,237 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286545237"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286545237"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286545237"}]},"ts":"1689286545237"} 2023-07-13 22:15:45,241 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure b4a18ea8d84755db0befaf862f1698a9, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:45,242 INFO [RS:3;jenkins-hbase4:43571] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43571%2C1689286544760, suffix=, logDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,43571,1689286544760, archiveDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs, maxLogs=32 2023-07-13 22:15:45,302 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK] 2023-07-13 22:15:45,303 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK] 2023-07-13 22:15:45,303 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK] 2023-07-13 22:15:45,313 INFO [RS:3;jenkins-hbase4:43571] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,43571,1689286544760/jenkins-hbase4.apache.org%2C43571%2C1689286544760.1689286545243 2023-07-13 22:15:45,313 DEBUG [RS:3;jenkins-hbase4:43571] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK], DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK], DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK]] 2023-07-13 22:15:45,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:45,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b4a18ea8d84755db0befaf862f1698a9, disabling compactions & flushes 2023-07-13 22:15:45,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:15:45,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:15:45,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. after waiting 0 ms 2023-07-13 22:15:45,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 
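Interleaved with the region close, RS:3 rolls its first WAL: an AsyncFSWAL with a 256 MB block size, 128 MB roll size and a three-datanode pipeline (maxLogs=32). The standard configuration keys behind those numbers are sketched below with illustrative values; this log only shows the effective settings, not where they were set:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    class WalConfigSketch {
      // Approximate knobs behind "WAL configuration: blocksize=256 MB, rollsize=128 MB,
      // ... maxLogs=32" above; key names are standard HBase config, values illustrative.
      static Configuration walConfig() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");                    // AsyncFSWALProvider
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f); // roll at 128 MB
        conf.setInt("hbase.regionserver.maxlogs", 32);
        return conf;
      }
    }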
2023-07-13 22:15:45,413 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b4a18ea8d84755db0befaf862f1698a9 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-13 22:15:45,500 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/.tmp/info/9961363ce0fd40d0a3cccafb91315d71 2023-07-13 22:15:45,551 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/.tmp/info/9961363ce0fd40d0a3cccafb91315d71 as hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/info/9961363ce0fd40d0a3cccafb91315d71 2023-07-13 22:15:45,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/info/9961363ce0fd40d0a3cccafb91315d71, entries=2, sequenceid=6, filesize=4.8 K 2023-07-13 22:15:45,574 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for b4a18ea8d84755db0befaf862f1698a9 in 161ms, sequenceid=6, compaction requested=false 2023-07-13 22:15:45,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-13 22:15:45,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-13 22:15:45,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 
2023-07-13 22:15:45,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b4a18ea8d84755db0befaf862f1698a9: 2023-07-13 22:15:45,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b4a18ea8d84755db0befaf862f1698a9 move to jenkins-hbase4.apache.org,43571,1689286544760 record at close sequenceid=6 2023-07-13 22:15:45,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:45,597 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=b4a18ea8d84755db0befaf862f1698a9, regionState=CLOSED 2023-07-13 22:15:45,597 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286545597"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286545597"}]},"ts":"1689286545597"} 2023-07-13 22:15:45,603 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-13 22:15:45,603 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure b4a18ea8d84755db0befaf862f1698a9, server=jenkins-hbase4.apache.org,38543,1689286541242 in 358 msec 2023-07-13 22:15:45,604 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b4a18ea8d84755db0befaf862f1698a9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:15:45,754 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
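Procedure pid=12 is a TransitRegionStateProcedure in REOPEN/MOVE: the hbase:namespace region b4a18ea8d84755db0befaf862f1698a9 is flushed and closed on server 38543 via CloseRegionProcedure pid=13, marked CLOSED in hbase:meta, and handed to the balancer for reassignment because its host is leaving the default group. The same kind of transit can be requested directly through the Admin API; a minimal sketch with placeholder names:

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.util.Bytes;

    class RegionMoveSketch {
      // Ask the master to run a REOPEN/MOVE transit for one region, as pid=12 does above.
      static void moveRegion(Admin admin, String encodedRegionName, ServerName target)
          throws Exception {
        // e.g. moveRegion(admin, "b4a18ea8d84755db0befaf862f1698a9",
        //     ServerName.valueOf("jenkins-hbase4.apache.org,43571,1689286544760"));
        admin.move(Bytes.toBytes(encodedRegionName), target);
      }
    }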
2023-07-13 22:15:45,754 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=b4a18ea8d84755db0befaf862f1698a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,755 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286545754"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286545754"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286545754"}]},"ts":"1689286545754"} 2023-07-13 22:15:45,758 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure b4a18ea8d84755db0befaf862f1698a9, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:45,912 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,913 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:15:45,916 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38890, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:15:45,923 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:15:45,923 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b4a18ea8d84755db0befaf862f1698a9, NAME => 'hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:45,924 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:45,924 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:45,924 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:45,924 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:45,926 INFO [StoreOpener-b4a18ea8d84755db0befaf862f1698a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:45,928 DEBUG [StoreOpener-b4a18ea8d84755db0befaf862f1698a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/info 2023-07-13 22:15:45,928 DEBUG [StoreOpener-b4a18ea8d84755db0befaf862f1698a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/info 2023-07-13 22:15:45,929 INFO [StoreOpener-b4a18ea8d84755db0befaf862f1698a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b4a18ea8d84755db0befaf862f1698a9 columnFamilyName info 2023-07-13 22:15:45,947 DEBUG [StoreOpener-b4a18ea8d84755db0befaf862f1698a9-1] regionserver.HStore(539): loaded hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/info/9961363ce0fd40d0a3cccafb91315d71 2023-07-13 22:15:45,948 INFO [StoreOpener-b4a18ea8d84755db0befaf862f1698a9-1] regionserver.HStore(310): Store=b4a18ea8d84755db0befaf862f1698a9/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:45,950 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:45,968 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:45,974 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:15:45,976 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b4a18ea8d84755db0befaf862f1698a9; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11098672320, jitterRate=0.033644407987594604}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:45,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b4a18ea8d84755db0befaf862f1698a9: 2023-07-13 22:15:45,978 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9., pid=14, masterSystemTime=1689286545912 2023-07-13 22:15:45,984 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:15:45,985 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 
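OpenRegionProcedure pid=14 brings the namespace region online on jenkins-hbase4.apache.org,43571 (next sequenceid=10) and the post-open deploy task updates hbase:meta. A test can confirm the region really landed on the destination server by listing that server's regions; a small sketch with assumed variable names:

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.RegionInfo;

    class RegionLocationCheckSketch {
      // True if the given server currently hosts the hbase:namespace region.
      static boolean hostsNamespaceRegion(Admin admin, ServerName server) throws Exception {
        for (RegionInfo region : admin.getRegions(server)) {
          if (region.getTable().equals(TableName.NAMESPACE_TABLE_NAME)) {
            return true;
          }
        }
        return false;
      }
    }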
2023-07-13 22:15:45,985 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=b4a18ea8d84755db0befaf862f1698a9, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:45,985 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286545985"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286545985"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286545985"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286545985"}]},"ts":"1689286545985"} 2023-07-13 22:15:45,996 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-13 22:15:45,996 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure b4a18ea8d84755db0befaf862f1698a9, server=jenkins-hbase4.apache.org,43571,1689286544760 in 233 msec 2023-07-13 22:15:45,999 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b4a18ea8d84755db0befaf862f1698a9, REOPEN/MOVE in 763 msec 2023-07-13 22:15:46,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-13 22:15:46,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242, jenkins-hbase4.apache.org,39109,1689286541053] are moved back to default 2023-07-13 22:15:46,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:46,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:46,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:46,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:46,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:46,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:46,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => 
'0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:15:46,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:46,275 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:15:46,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-13 22:15:46,281 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:46,282 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:46,283 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:46,283 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:46,290 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:15:46,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 22:15:46,298 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:46,298 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:46,298 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:46,299 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:46,299 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184 empty. 2023-07-13 22:15:46,299 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e empty. 2023-07-13 22:15:46,306 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9 empty. 
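Procedure pid=15 is a CreateTableProcedure for 'Group_testTableMoveTruncateAndDrop': one family 'f', REGION_REPLICATION=1, and five regions whose boundaries appear below as 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz'; before writing the FS layout it archives leftover region directories under .tmp. An approximate client-side equivalent using the public Admin API (the two middle boundaries are the evenly spaced split points between 'aaaaa' and 'zzzzz', so Bytes.split is used to approximately recreate them; the actual test helper used is not visible in this log):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    class CreateTableSketch {
      // Creates a table with the same five-region layout the procedure above is building:
      // four split keys {aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz} -> five regions.
      static void createGroupTestTable(Admin admin) throws Exception {
        TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        byte[][] splitKeys = Bytes.split(Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 2);
        admin.createTable(
            TableDescriptorBuilder.newBuilder(name)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                .build(),
            splitKeys);
      }
    }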
2023-07-13 22:15:46,306 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d 2023-07-13 22:15:46,307 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:46,307 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:46,308 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5 empty. 2023-07-13 22:15:46,308 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d empty. 2023-07-13 22:15:46,308 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:46,309 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:46,309 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d 2023-07-13 22:15:46,309 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 22:15:46,350 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-13 22:15:46,353 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 17ad373bac3ecfe75a60a424a165f42e, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:46,354 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8e0f8aa9715aebf8be99593740ebc2d9, NAME => 
'Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:46,354 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => a58e66976d6cc729daf7d8036b8e1184, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:46,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 22:15:46,434 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:46,436 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing a58e66976d6cc729daf7d8036b8e1184, disabling compactions & flushes 2023-07-13 22:15:46,436 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:46,437 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:46,437 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. after waiting 0 ms 2023-07-13 22:15:46,437 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:46,437 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 
2023-07-13 22:15:46,437 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for a58e66976d6cc729daf7d8036b8e1184: 2023-07-13 22:15:46,437 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => c1bb15c2e552f631646e6202148d91c5, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:46,438 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:46,443 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:46,444 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 8e0f8aa9715aebf8be99593740ebc2d9, disabling compactions & flushes 2023-07-13 22:15:46,444 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:46,444 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:46,444 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 17ad373bac3ecfe75a60a424a165f42e, disabling compactions & flushes 2023-07-13 22:15:46,444 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:46,444 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:46,444 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. after waiting 0 ms 2023-07-13 22:15:46,444 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 
2023-07-13 22:15:46,444 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:46,444 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 17ad373bac3ecfe75a60a424a165f42e: 2023-07-13 22:15:46,444 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. after waiting 0 ms 2023-07-13 22:15:46,444 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:46,444 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:46,444 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 8e0f8aa9715aebf8be99593740ebc2d9: 2023-07-13 22:15:46,445 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => ba20775a194a410cd841460231e4759d, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:46,506 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:46,506 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing c1bb15c2e552f631646e6202148d91c5, disabling compactions & flushes 2023-07-13 22:15:46,506 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:46,506 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:46,506 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 
after waiting 0 ms 2023-07-13 22:15:46,507 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:46,507 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:46,507 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for c1bb15c2e552f631646e6202148d91c5: 2023-07-13 22:15:46,507 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:46,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing ba20775a194a410cd841460231e4759d, disabling compactions & flushes 2023-07-13 22:15:46,508 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 2023-07-13 22:15:46,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 2023-07-13 22:15:46,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. after waiting 0 ms 2023-07-13 22:15:46,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 2023-07-13 22:15:46,508 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 
2023-07-13 22:15:46,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for ba20775a194a410cd841460231e4759d: 2023-07-13 22:15:46,512 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:15:46,514 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286546513"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286546513"}]},"ts":"1689286546513"} 2023-07-13 22:15:46,514 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286546513"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286546513"}]},"ts":"1689286546513"} 2023-07-13 22:15:46,514 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286546513"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286546513"}]},"ts":"1689286546513"} 2023-07-13 22:15:46,514 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286546513"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286546513"}]},"ts":"1689286546513"} 2023-07-13 22:15:46,514 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286546513"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286546513"}]},"ts":"1689286546513"} 2023-07-13 22:15:46,559 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-13 22:15:46,561 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:15:46,561 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286546561"}]},"ts":"1689286546561"} 2023-07-13 22:15:46,563 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-13 22:15:46,573 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:46,573 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:46,573 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:46,573 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:46,574 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e0f8aa9715aebf8be99593740ebc2d9, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a58e66976d6cc729daf7d8036b8e1184, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ad373bac3ecfe75a60a424a165f42e, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c1bb15c2e552f631646e6202148d91c5, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ba20775a194a410cd841460231e4759d, ASSIGN}] 2023-07-13 22:15:46,577 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ba20775a194a410cd841460231e4759d, ASSIGN 2023-07-13 22:15:46,577 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c1bb15c2e552f631646e6202148d91c5, ASSIGN 2023-07-13 22:15:46,578 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ad373bac3ecfe75a60a424a165f42e, ASSIGN 2023-07-13 22:15:46,578 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a58e66976d6cc729daf7d8036b8e1184, ASSIGN 2023-07-13 22:15:46,581 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e0f8aa9715aebf8be99593740ebc2d9, ASSIGN 2023-07-13 22:15:46,581 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c1bb15c2e552f631646e6202148d91c5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39325,1689286540864; forceNewPlan=false, retain=false 2023-07-13 22:15:46,581 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ad373bac3ecfe75a60a424a165f42e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39325,1689286540864; forceNewPlan=false, retain=false 2023-07-13 22:15:46,581 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a58e66976d6cc729daf7d8036b8e1184, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:15:46,581 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ba20775a194a410cd841460231e4759d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:15:46,583 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e0f8aa9715aebf8be99593740ebc2d9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:15:46,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 22:15:46,731 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-13 22:15:46,735 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=ba20775a194a410cd841460231e4759d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:46,735 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=17ad373bac3ecfe75a60a424a165f42e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:46,735 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=c1bb15c2e552f631646e6202148d91c5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:46,735 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=8e0f8aa9715aebf8be99593740ebc2d9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:46,735 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=a58e66976d6cc729daf7d8036b8e1184, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:46,735 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286546735"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286546735"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286546735"}]},"ts":"1689286546735"} 2023-07-13 22:15:46,735 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286546734"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286546734"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286546734"}]},"ts":"1689286546734"} 2023-07-13 22:15:46,735 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286546735"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286546735"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286546735"}]},"ts":"1689286546735"} 2023-07-13 22:15:46,735 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286546734"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286546734"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286546734"}]},"ts":"1689286546734"} 2023-07-13 22:15:46,735 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286546734"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286546734"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286546734"}]},"ts":"1689286546734"} 2023-07-13 22:15:46,738 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=16, state=RUNNABLE; OpenRegionProcedure 
8e0f8aa9715aebf8be99593740ebc2d9, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:46,741 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=18, state=RUNNABLE; OpenRegionProcedure 17ad373bac3ecfe75a60a424a165f42e, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:15:46,743 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=19, state=RUNNABLE; OpenRegionProcedure c1bb15c2e552f631646e6202148d91c5, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:15:46,745 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=20, state=RUNNABLE; OpenRegionProcedure ba20775a194a410cd841460231e4759d, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:46,747 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=17, state=RUNNABLE; OpenRegionProcedure a58e66976d6cc729daf7d8036b8e1184, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:46,898 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:46,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8e0f8aa9715aebf8be99593740ebc2d9, NAME => 'Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 22:15:46,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:46,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:46,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:46,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:46,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 
2023-07-13 22:15:46,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 17ad373bac3ecfe75a60a424a165f42e, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 22:15:46,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:46,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:46,903 INFO [StoreOpener-8e0f8aa9715aebf8be99593740ebc2d9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:46,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:46,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:46,906 DEBUG [StoreOpener-8e0f8aa9715aebf8be99593740ebc2d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9/f 2023-07-13 22:15:46,906 DEBUG [StoreOpener-8e0f8aa9715aebf8be99593740ebc2d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9/f 2023-07-13 22:15:46,906 INFO [StoreOpener-8e0f8aa9715aebf8be99593740ebc2d9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8e0f8aa9715aebf8be99593740ebc2d9 columnFamilyName f 2023-07-13 22:15:46,906 INFO [StoreOpener-17ad373bac3ecfe75a60a424a165f42e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:46,907 INFO [StoreOpener-8e0f8aa9715aebf8be99593740ebc2d9-1] regionserver.HStore(310): Store=8e0f8aa9715aebf8be99593740ebc2d9/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:46,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:46,909 DEBUG [StoreOpener-17ad373bac3ecfe75a60a424a165f42e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e/f 2023-07-13 22:15:46,910 DEBUG [StoreOpener-17ad373bac3ecfe75a60a424a165f42e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e/f 2023-07-13 22:15:46,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:46,910 INFO [StoreOpener-17ad373bac3ecfe75a60a424a165f42e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 17ad373bac3ecfe75a60a424a165f42e columnFamilyName f 2023-07-13 22:15:46,911 INFO [StoreOpener-17ad373bac3ecfe75a60a424a165f42e-1] regionserver.HStore(310): Store=17ad373bac3ecfe75a60a424a165f42e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:46,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:46,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:46,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 22:15:46,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:46,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:46,924 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:46,925 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8e0f8aa9715aebf8be99593740ebc2d9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10315446880, jitterRate=-0.039299145340919495}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:46,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8e0f8aa9715aebf8be99593740ebc2d9: 2023-07-13 22:15:46,926 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9., pid=21, masterSystemTime=1689286546891 2023-07-13 22:15:46,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:46,928 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 17ad373bac3ecfe75a60a424a165f42e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10930431680, jitterRate=0.01797577738761902}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:46,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 17ad373bac3ecfe75a60a424a165f42e: 2023-07-13 22:15:46,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e., pid=22, masterSystemTime=1689286546894 2023-07-13 22:15:46,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:46,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:46,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 
2023-07-13 22:15:46,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a58e66976d6cc729daf7d8036b8e1184, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 22:15:46,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:46,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:46,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:46,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:46,930 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=8e0f8aa9715aebf8be99593740ebc2d9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:46,930 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286546930"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286546930"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286546930"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286546930"}]},"ts":"1689286546930"} 2023-07-13 22:15:46,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:46,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:46,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 
2023-07-13 22:15:46,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c1bb15c2e552f631646e6202148d91c5, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 22:15:46,932 INFO [StoreOpener-a58e66976d6cc729daf7d8036b8e1184-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:46,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:46,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:46,933 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=17ad373bac3ecfe75a60a424a165f42e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:46,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:46,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:46,933 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286546933"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286546933"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286546933"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286546933"}]},"ts":"1689286546933"} 2023-07-13 22:15:46,935 DEBUG [StoreOpener-a58e66976d6cc729daf7d8036b8e1184-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184/f 2023-07-13 22:15:46,935 DEBUG [StoreOpener-a58e66976d6cc729daf7d8036b8e1184-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184/f 2023-07-13 22:15:46,936 INFO [StoreOpener-a58e66976d6cc729daf7d8036b8e1184-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a58e66976d6cc729daf7d8036b8e1184 columnFamilyName f 2023-07-13 22:15:46,937 INFO [StoreOpener-a58e66976d6cc729daf7d8036b8e1184-1] regionserver.HStore(310): Store=a58e66976d6cc729daf7d8036b8e1184/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:46,937 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=16 2023-07-13 22:15:46,938 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=16, state=SUCCESS; OpenRegionProcedure 8e0f8aa9715aebf8be99593740ebc2d9, server=jenkins-hbase4.apache.org,43571,1689286544760 in 195 msec 2023-07-13 22:15:46,938 INFO [StoreOpener-c1bb15c2e552f631646e6202148d91c5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:46,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:46,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:46,941 DEBUG [StoreOpener-c1bb15c2e552f631646e6202148d91c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5/f 2023-07-13 22:15:46,941 DEBUG [StoreOpener-c1bb15c2e552f631646e6202148d91c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5/f 2023-07-13 22:15:46,942 INFO [StoreOpener-c1bb15c2e552f631646e6202148d91c5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c1bb15c2e552f631646e6202148d91c5 columnFamilyName f 2023-07-13 22:15:46,943 INFO [StoreOpener-c1bb15c2e552f631646e6202148d91c5-1] regionserver.HStore(310): Store=c1bb15c2e552f631646e6202148d91c5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:46,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:46,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:46,947 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e0f8aa9715aebf8be99593740ebc2d9, ASSIGN in 364 msec 2023-07-13 22:15:46,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:46,947 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=18 2023-07-13 22:15:46,947 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; OpenRegionProcedure 17ad373bac3ecfe75a60a424a165f42e, server=jenkins-hbase4.apache.org,39325,1689286540864 in 196 msec 2023-07-13 22:15:46,950 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ad373bac3ecfe75a60a424a165f42e, ASSIGN in 373 msec 2023-07-13 22:15:46,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:46,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:46,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a58e66976d6cc729daf7d8036b8e1184; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10775637920, jitterRate=0.003559485077857971}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:46,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a58e66976d6cc729daf7d8036b8e1184: 2023-07-13 22:15:46,952 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184., pid=25, masterSystemTime=1689286546891 2023-07-13 22:15:46,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:46,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:46,955 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=a58e66976d6cc729daf7d8036b8e1184, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:46,955 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c1bb15c2e552f631646e6202148d91c5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10361757280, jitterRate=-0.03498615324497223}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:46,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c1bb15c2e552f631646e6202148d91c5: 2023-07-13 22:15:46,956 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286546955"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286546955"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286546955"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286546955"}]},"ts":"1689286546955"} 2023-07-13 22:15:46,956 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:46,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 
2023-07-13 22:15:46,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5., pid=23, masterSystemTime=1689286546894 2023-07-13 22:15:46,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ba20775a194a410cd841460231e4759d, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 22:15:46,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ba20775a194a410cd841460231e4759d 2023-07-13 22:15:46,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:46,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ba20775a194a410cd841460231e4759d 2023-07-13 22:15:46,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ba20775a194a410cd841460231e4759d 2023-07-13 22:15:46,960 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:46,960 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 
2023-07-13 22:15:46,961 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=c1bb15c2e552f631646e6202148d91c5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:46,961 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286546961"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286546961"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286546961"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286546961"}]},"ts":"1689286546961"} 2023-07-13 22:15:46,964 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=17 2023-07-13 22:15:46,964 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=17, state=SUCCESS; OpenRegionProcedure a58e66976d6cc729daf7d8036b8e1184, server=jenkins-hbase4.apache.org,43571,1689286544760 in 213 msec 2023-07-13 22:15:46,967 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a58e66976d6cc729daf7d8036b8e1184, ASSIGN in 390 msec 2023-07-13 22:15:46,970 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c1bb15c2e552f631646e6202148d91c5, ASSIGN in 394 msec 2023-07-13 22:15:46,968 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=19 2023-07-13 22:15:46,971 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=19, state=SUCCESS; OpenRegionProcedure c1bb15c2e552f631646e6202148d91c5, server=jenkins-hbase4.apache.org,39325,1689286540864 in 221 msec 2023-07-13 22:15:46,972 INFO [StoreOpener-ba20775a194a410cd841460231e4759d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ba20775a194a410cd841460231e4759d 2023-07-13 22:15:46,975 DEBUG [StoreOpener-ba20775a194a410cd841460231e4759d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d/f 2023-07-13 22:15:46,975 DEBUG [StoreOpener-ba20775a194a410cd841460231e4759d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d/f 2023-07-13 22:15:46,976 INFO [StoreOpener-ba20775a194a410cd841460231e4759d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ba20775a194a410cd841460231e4759d columnFamilyName f 2023-07-13 22:15:46,977 INFO [StoreOpener-ba20775a194a410cd841460231e4759d-1] regionserver.HStore(310): Store=ba20775a194a410cd841460231e4759d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:46,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d 2023-07-13 22:15:46,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d 2023-07-13 22:15:46,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ba20775a194a410cd841460231e4759d 2023-07-13 22:15:46,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:46,987 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ba20775a194a410cd841460231e4759d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10931669280, jitterRate=0.01809103786945343}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:46,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ba20775a194a410cd841460231e4759d: 2023-07-13 22:15:46,988 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d., pid=24, masterSystemTime=1689286546891 2023-07-13 22:15:46,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 2023-07-13 22:15:46,990 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 
2023-07-13 22:15:46,991 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=ba20775a194a410cd841460231e4759d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:46,991 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286546991"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286546991"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286546991"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286546991"}]},"ts":"1689286546991"} 2023-07-13 22:15:46,997 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=20 2023-07-13 22:15:46,997 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=20, state=SUCCESS; OpenRegionProcedure ba20775a194a410cd841460231e4759d, server=jenkins-hbase4.apache.org,43571,1689286544760 in 248 msec 2023-07-13 22:15:47,000 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=15 2023-07-13 22:15:47,000 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ba20775a194a410cd841460231e4759d, ASSIGN in 423 msec 2023-07-13 22:15:47,001 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:15:47,003 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286547003"}]},"ts":"1689286547003"} 2023-07-13 22:15:47,005 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-13 22:15:47,009 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:15:47,012 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 749 msec 2023-07-13 22:15:47,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 22:15:47,420 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-13 22:15:47,421 DEBUG [Listener at localhost/39613] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-13 22:15:47,421 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:47,428 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 
2023-07-13 22:15:47,429 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:47,429 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-13 22:15:47,429 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:47,434 DEBUG [Listener at localhost/39613] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:15:47,437 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33764, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:15:47,440 DEBUG [Listener at localhost/39613] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:15:47,443 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60070, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:15:47,443 DEBUG [Listener at localhost/39613] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:15:47,445 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:15:47,447 DEBUG [Listener at localhost/39613] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:15:47,449 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38900, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:15:47,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:47,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:15:47,461 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:47,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:47,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:47,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:47,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:47,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:47,477 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:47,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region 8e0f8aa9715aebf8be99593740ebc2d9 to RSGroup Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:47,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:47,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:47,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:47,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:15:47,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:47,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e0f8aa9715aebf8be99593740ebc2d9, REOPEN/MOVE 2023-07-13 22:15:47,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region a58e66976d6cc729daf7d8036b8e1184 to RSGroup Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:47,480 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e0f8aa9715aebf8be99593740ebc2d9, REOPEN/MOVE 2023-07-13 22:15:47,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:47,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:47,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:47,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:15:47,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:47,482 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=8e0f8aa9715aebf8be99593740ebc2d9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:47,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a58e66976d6cc729daf7d8036b8e1184, REOPEN/MOVE 2023-07-13 22:15:47,482 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286547482"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286547482"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286547482"}]},"ts":"1689286547482"} 2023-07-13 22:15:47,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region 17ad373bac3ecfe75a60a424a165f42e to RSGroup Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:47,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:47,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:47,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:47,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:15:47,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:47,490 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a58e66976d6cc729daf7d8036b8e1184, REOPEN/MOVE 2023-07-13 22:15:47,491 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure 8e0f8aa9715aebf8be99593740ebc2d9, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:47,492 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=a58e66976d6cc729daf7d8036b8e1184, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:47,492 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286547492"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286547492"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286547492"}]},"ts":"1689286547492"} 2023-07-13 22:15:47,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ad373bac3ecfe75a60a424a165f42e, REOPEN/MOVE 2023-07-13 22:15:47,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region c1bb15c2e552f631646e6202148d91c5 to RSGroup Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:47,495 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ad373bac3ecfe75a60a424a165f42e, REOPEN/MOVE 2023-07-13 22:15:47,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] 
balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:47,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:47,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:47,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:15:47,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:47,497 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure a58e66976d6cc729daf7d8036b8e1184, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:47,497 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=17ad373bac3ecfe75a60a424a165f42e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:47,498 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286547497"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286547497"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286547497"}]},"ts":"1689286547497"} 2023-07-13 22:15:47,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c1bb15c2e552f631646e6202148d91c5, REOPEN/MOVE 2023-07-13 22:15:47,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region ba20775a194a410cd841460231e4759d to RSGroup Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:47,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:47,505 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c1bb15c2e552f631646e6202148d91c5, REOPEN/MOVE 2023-07-13 22:15:47,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:47,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:47,507 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=28, state=RUNNABLE; CloseRegionProcedure 17ad373bac3ecfe75a60a424a165f42e, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:15:47,507 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=c1bb15c2e552f631646e6202148d91c5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:47,507 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286547507"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286547507"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286547507"}]},"ts":"1689286547507"} 2023-07-13 22:15:47,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:15:47,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:47,510 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure c1bb15c2e552f631646e6202148d91c5, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:15:47,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ba20775a194a410cd841460231e4759d, REOPEN/MOVE 2023-07-13 22:15:47,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1384789718, current retry=0 2023-07-13 22:15:47,511 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ba20775a194a410cd841460231e4759d, REOPEN/MOVE 2023-07-13 22:15:47,513 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=ba20775a194a410cd841460231e4759d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:47,513 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286547512"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286547512"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286547512"}]},"ts":"1689286547512"} 2023-07-13 22:15:47,515 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=33, state=RUNNABLE; CloseRegionProcedure ba20775a194a410cd841460231e4759d, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:47,648 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:47,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8e0f8aa9715aebf8be99593740ebc2d9, disabling compactions & flushes 2023-07-13 22:15:47,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:47,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 
2023-07-13 22:15:47,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. after waiting 0 ms 2023-07-13 22:15:47,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:47,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:47,663 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:47,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 17ad373bac3ecfe75a60a424a165f42e, disabling compactions & flushes 2023-07-13 22:15:47,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:47,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:47,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. after waiting 0 ms 2023-07-13 22:15:47,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:47,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:47,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8e0f8aa9715aebf8be99593740ebc2d9: 2023-07-13 22:15:47,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8e0f8aa9715aebf8be99593740ebc2d9 move to jenkins-hbase4.apache.org,38543,1689286541242 record at close sequenceid=2 2023-07-13 22:15:47,669 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:47,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ba20775a194a410cd841460231e4759d 2023-07-13 22:15:47,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ba20775a194a410cd841460231e4759d, disabling compactions & flushes 2023-07-13 22:15:47,677 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 
2023-07-13 22:15:47,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 2023-07-13 22:15:47,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. after waiting 0 ms 2023-07-13 22:15:47,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 2023-07-13 22:15:47,679 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=8e0f8aa9715aebf8be99593740ebc2d9, regionState=CLOSED 2023-07-13 22:15:47,679 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286547679"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286547679"}]},"ts":"1689286547679"} 2023-07-13 22:15:47,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:47,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 
2023-07-13 22:15:47,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 17ad373bac3ecfe75a60a424a165f42e: 2023-07-13 22:15:47,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 17ad373bac3ecfe75a60a424a165f42e move to jenkins-hbase4.apache.org,39109,1689286541053 record at close sequenceid=2 2023-07-13 22:15:47,690 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-13 22:15:47,691 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure 8e0f8aa9715aebf8be99593740ebc2d9, server=jenkins-hbase4.apache.org,43571,1689286544760 in 191 msec 2023-07-13 22:15:47,692 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e0f8aa9715aebf8be99593740ebc2d9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38543,1689286541242; forceNewPlan=false, retain=false 2023-07-13 22:15:47,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:47,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:47,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c1bb15c2e552f631646e6202148d91c5, disabling compactions & flushes 2023-07-13 22:15:47,695 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:47,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:47,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. after waiting 0 ms 2023-07-13 22:15:47,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:47,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:47,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 
2023-07-13 22:15:47,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ba20775a194a410cd841460231e4759d: 2023-07-13 22:15:47,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ba20775a194a410cd841460231e4759d move to jenkins-hbase4.apache.org,39109,1689286541053 record at close sequenceid=2 2023-07-13 22:15:47,702 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=17ad373bac3ecfe75a60a424a165f42e, regionState=CLOSED 2023-07-13 22:15:47,703 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286547702"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286547702"}]},"ts":"1689286547702"} 2023-07-13 22:15:47,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ba20775a194a410cd841460231e4759d 2023-07-13 22:15:47,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:47,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a58e66976d6cc729daf7d8036b8e1184, disabling compactions & flushes 2023-07-13 22:15:47,708 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:47,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:47,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. after waiting 0 ms 2023-07-13 22:15:47,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 
2023-07-13 22:15:47,711 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=ba20775a194a410cd841460231e4759d, regionState=CLOSED 2023-07-13 22:15:47,711 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286547711"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286547711"}]},"ts":"1689286547711"} 2023-07-13 22:15:47,718 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=28 2023-07-13 22:15:47,718 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=28, state=SUCCESS; CloseRegionProcedure 17ad373bac3ecfe75a60a424a165f42e, server=jenkins-hbase4.apache.org,39325,1689286540864 in 206 msec 2023-07-13 22:15:47,720 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=33 2023-07-13 22:15:47,720 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=33, state=SUCCESS; CloseRegionProcedure ba20775a194a410cd841460231e4759d, server=jenkins-hbase4.apache.org,43571,1689286544760 in 200 msec 2023-07-13 22:15:47,721 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ad373bac3ecfe75a60a424a165f42e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39109,1689286541053; forceNewPlan=false, retain=false 2023-07-13 22:15:47,723 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ba20775a194a410cd841460231e4759d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39109,1689286541053; forceNewPlan=false, retain=false 2023-07-13 22:15:47,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:47,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 
2023-07-13 22:15:47,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c1bb15c2e552f631646e6202148d91c5: 2023-07-13 22:15:47,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c1bb15c2e552f631646e6202148d91c5 move to jenkins-hbase4.apache.org,39109,1689286541053 record at close sequenceid=2 2023-07-13 22:15:47,740 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:47,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:47,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:47,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a58e66976d6cc729daf7d8036b8e1184: 2023-07-13 22:15:47,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a58e66976d6cc729daf7d8036b8e1184 move to jenkins-hbase4.apache.org,38543,1689286541242 record at close sequenceid=2 2023-07-13 22:15:47,742 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=c1bb15c2e552f631646e6202148d91c5, regionState=CLOSED 2023-07-13 22:15:47,742 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286547742"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286547742"}]},"ts":"1689286547742"} 2023-07-13 22:15:47,746 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:47,754 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=a58e66976d6cc729daf7d8036b8e1184, regionState=CLOSED 2023-07-13 22:15:47,755 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286547754"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286547754"}]},"ts":"1689286547754"} 2023-07-13 22:15:47,759 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-13 22:15:47,760 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure c1bb15c2e552f631646e6202148d91c5, server=jenkins-hbase4.apache.org,39325,1689286540864 in 240 msec 2023-07-13 22:15:47,762 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c1bb15c2e552f631646e6202148d91c5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39109,1689286541053; forceNewPlan=false, retain=false 2023-07-13 22:15:47,763 INFO 
[PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-13 22:15:47,763 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure a58e66976d6cc729daf7d8036b8e1184, server=jenkins-hbase4.apache.org,43571,1689286544760 in 261 msec 2023-07-13 22:15:47,765 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a58e66976d6cc729daf7d8036b8e1184, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38543,1689286541242; forceNewPlan=false, retain=false 2023-07-13 22:15:47,844 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-13 22:15:47,845 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=c1bb15c2e552f631646e6202148d91c5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:47,845 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=ba20775a194a410cd841460231e4759d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:47,845 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286547844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286547844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286547844"}]},"ts":"1689286547844"} 2023-07-13 22:15:47,845 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286547844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286547844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286547844"}]},"ts":"1689286547844"} 2023-07-13 22:15:47,845 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=17ad373bac3ecfe75a60a424a165f42e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:47,845 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=a58e66976d6cc729daf7d8036b8e1184, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:47,845 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286547845"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286547845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286547845"}]},"ts":"1689286547845"} 2023-07-13 22:15:47,845 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=8e0f8aa9715aebf8be99593740ebc2d9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:47,846 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286547845"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286547845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286547845"}]},"ts":"1689286547845"} 2023-07-13 22:15:47,846 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286547845"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286547845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286547845"}]},"ts":"1689286547845"} 2023-07-13 22:15:47,851 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=30, state=RUNNABLE; OpenRegionProcedure c1bb15c2e552f631646e6202148d91c5, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:47,855 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; OpenRegionProcedure ba20775a194a410cd841460231e4759d, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:47,857 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=28, state=RUNNABLE; OpenRegionProcedure 17ad373bac3ecfe75a60a424a165f42e, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:47,863 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=27, state=RUNNABLE; OpenRegionProcedure a58e66976d6cc729daf7d8036b8e1184, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:47,864 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=26, state=RUNNABLE; OpenRegionProcedure 8e0f8aa9715aebf8be99593740ebc2d9, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:48,007 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:48,007 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:15:48,011 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60082, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:15:48,017 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 
2023-07-13 22:15:48,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 17ad373bac3ecfe75a60a424a165f42e, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 22:15:48,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:48,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:48,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:48,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:48,020 INFO [StoreOpener-17ad373bac3ecfe75a60a424a165f42e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:48,022 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 
2023-07-13 22:15:48,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8e0f8aa9715aebf8be99593740ebc2d9, NAME => 'Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 22:15:48,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:48,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:48,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:48,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:48,027 DEBUG [StoreOpener-17ad373bac3ecfe75a60a424a165f42e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e/f 2023-07-13 22:15:48,027 DEBUG [StoreOpener-17ad373bac3ecfe75a60a424a165f42e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e/f 2023-07-13 22:15:48,028 INFO [StoreOpener-17ad373bac3ecfe75a60a424a165f42e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 17ad373bac3ecfe75a60a424a165f42e columnFamilyName f 2023-07-13 22:15:48,029 INFO [StoreOpener-17ad373bac3ecfe75a60a424a165f42e-1] regionserver.HStore(310): Store=17ad373bac3ecfe75a60a424a165f42e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:48,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:48,038 INFO [StoreOpener-8e0f8aa9715aebf8be99593740ebc2d9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 
22:15:48,039 DEBUG [StoreOpener-8e0f8aa9715aebf8be99593740ebc2d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9/f 2023-07-13 22:15:48,040 DEBUG [StoreOpener-8e0f8aa9715aebf8be99593740ebc2d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9/f 2023-07-13 22:15:48,040 INFO [StoreOpener-8e0f8aa9715aebf8be99593740ebc2d9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8e0f8aa9715aebf8be99593740ebc2d9 columnFamilyName f 2023-07-13 22:15:48,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:48,042 INFO [StoreOpener-8e0f8aa9715aebf8be99593740ebc2d9-1] regionserver.HStore(310): Store=8e0f8aa9715aebf8be99593740ebc2d9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:48,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:48,045 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:48,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:48,046 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 17ad373bac3ecfe75a60a424a165f42e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10143897280, jitterRate=-0.055275946855545044}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:48,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 17ad373bac3ecfe75a60a424a165f42e: 2023-07-13 22:15:48,048 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e., 
pid=38, masterSystemTime=1689286548007 2023-07-13 22:15:48,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:48,053 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8e0f8aa9715aebf8be99593740ebc2d9; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10563854080, jitterRate=-0.016164422035217285}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:48,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8e0f8aa9715aebf8be99593740ebc2d9: 2023-07-13 22:15:48,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:48,058 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:48,058 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 2023-07-13 22:15:48,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ba20775a194a410cd841460231e4759d, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 22:15:48,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ba20775a194a410cd841460231e4759d 2023-07-13 22:15:48,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:48,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ba20775a194a410cd841460231e4759d 2023-07-13 22:15:48,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ba20775a194a410cd841460231e4759d 2023-07-13 22:15:48,060 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9., pid=40, masterSystemTime=1689286548016 2023-07-13 22:15:48,067 INFO [StoreOpener-ba20775a194a410cd841460231e4759d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ba20775a194a410cd841460231e4759d 2023-07-13 22:15:48,067 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=17ad373bac3ecfe75a60a424a165f42e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:48,068 DEBUG [PEWorker-1] 
assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286548067"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286548067"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286548067"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286548067"}]},"ts":"1689286548067"} 2023-07-13 22:15:48,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:48,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:48,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:48,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a58e66976d6cc729daf7d8036b8e1184, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 22:15:48,071 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=8e0f8aa9715aebf8be99593740ebc2d9, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:48,071 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286548071"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286548071"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286548071"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286548071"}]},"ts":"1689286548071"} 2023-07-13 22:15:48,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:48,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:48,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:48,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:48,072 DEBUG [StoreOpener-ba20775a194a410cd841460231e4759d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d/f 2023-07-13 22:15:48,073 DEBUG [StoreOpener-ba20775a194a410cd841460231e4759d-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d/f 2023-07-13 22:15:48,076 INFO [StoreOpener-a58e66976d6cc729daf7d8036b8e1184-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:48,077 DEBUG [StoreOpener-a58e66976d6cc729daf7d8036b8e1184-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184/f 2023-07-13 22:15:48,078 DEBUG [StoreOpener-a58e66976d6cc729daf7d8036b8e1184-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184/f 2023-07-13 22:15:48,078 INFO [StoreOpener-ba20775a194a410cd841460231e4759d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ba20775a194a410cd841460231e4759d columnFamilyName f 2023-07-13 22:15:48,078 INFO [StoreOpener-a58e66976d6cc729daf7d8036b8e1184-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a58e66976d6cc729daf7d8036b8e1184 columnFamilyName f 2023-07-13 22:15:48,080 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=28 2023-07-13 22:15:48,081 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=28, state=SUCCESS; OpenRegionProcedure 17ad373bac3ecfe75a60a424a165f42e, server=jenkins-hbase4.apache.org,39109,1689286541053 in 216 msec 2023-07-13 22:15:48,081 INFO [StoreOpener-ba20775a194a410cd841460231e4759d-1] regionserver.HStore(310): Store=ba20775a194a410cd841460231e4759d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:48,081 INFO [StoreOpener-a58e66976d6cc729daf7d8036b8e1184-1] regionserver.HStore(310): Store=a58e66976d6cc729daf7d8036b8e1184/f, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:48,082 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=26 2023-07-13 22:15:48,082 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=26, state=SUCCESS; OpenRegionProcedure 8e0f8aa9715aebf8be99593740ebc2d9, server=jenkins-hbase4.apache.org,38543,1689286541242 in 212 msec 2023-07-13 22:15:48,082 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:48,084 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:48,086 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ad373bac3ecfe75a60a424a165f42e, REOPEN/MOVE in 598 msec 2023-07-13 22:15:48,086 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e0f8aa9715aebf8be99593740ebc2d9, REOPEN/MOVE in 604 msec 2023-07-13 22:15:48,089 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:48,091 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a58e66976d6cc729daf7d8036b8e1184; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9620778240, jitterRate=-0.10399520397186279}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:48,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a58e66976d6cc729daf7d8036b8e1184: 2023-07-13 22:15:48,092 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184., pid=39, masterSystemTime=1689286548016 2023-07-13 22:15:48,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:48,094 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 
2023-07-13 22:15:48,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d 2023-07-13 22:15:48,095 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=a58e66976d6cc729daf7d8036b8e1184, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:48,095 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286548094"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286548094"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286548094"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286548094"}]},"ts":"1689286548094"} 2023-07-13 22:15:48,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d 2023-07-13 22:15:48,102 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=27 2023-07-13 22:15:48,102 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=27, state=SUCCESS; OpenRegionProcedure a58e66976d6cc729daf7d8036b8e1184, server=jenkins-hbase4.apache.org,38543,1689286541242 in 234 msec 2023-07-13 22:15:48,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ba20775a194a410cd841460231e4759d 2023-07-13 22:15:48,105 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ba20775a194a410cd841460231e4759d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11477579360, jitterRate=0.06893287599086761}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:48,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ba20775a194a410cd841460231e4759d: 2023-07-13 22:15:48,106 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a58e66976d6cc729daf7d8036b8e1184, REOPEN/MOVE in 621 msec 2023-07-13 22:15:48,107 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d., pid=37, masterSystemTime=1689286548007 2023-07-13 22:15:48,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 2023-07-13 22:15:48,109 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 
2023-07-13 22:15:48,109 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:48,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c1bb15c2e552f631646e6202148d91c5, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 22:15:48,110 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:48,110 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:48,110 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:48,110 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:48,115 INFO [StoreOpener-c1bb15c2e552f631646e6202148d91c5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:48,116 DEBUG [StoreOpener-c1bb15c2e552f631646e6202148d91c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5/f 2023-07-13 22:15:48,116 DEBUG [StoreOpener-c1bb15c2e552f631646e6202148d91c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5/f 2023-07-13 22:15:48,116 INFO [StoreOpener-c1bb15c2e552f631646e6202148d91c5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c1bb15c2e552f631646e6202148d91c5 columnFamilyName f 2023-07-13 22:15:48,117 INFO [StoreOpener-c1bb15c2e552f631646e6202148d91c5-1] regionserver.HStore(310): Store=c1bb15c2e552f631646e6202148d91c5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:48,118 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:48,119 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=ba20775a194a410cd841460231e4759d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:48,119 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286548119"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286548119"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286548119"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286548119"}]},"ts":"1689286548119"} 2023-07-13 22:15:48,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:48,128 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-13 22:15:48,128 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; OpenRegionProcedure ba20775a194a410cd841460231e4759d, server=jenkins-hbase4.apache.org,39109,1689286541053 in 267 msec 2023-07-13 22:15:48,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:48,135 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c1bb15c2e552f631646e6202148d91c5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11586284480, jitterRate=0.07905682921409607}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:48,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c1bb15c2e552f631646e6202148d91c5: 2023-07-13 22:15:48,137 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5., pid=36, masterSystemTime=1689286548007 2023-07-13 22:15:48,147 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ba20775a194a410cd841460231e4759d, REOPEN/MOVE in 620 msec 2023-07-13 22:15:48,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:48,148 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 
2023-07-13 22:15:48,150 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=c1bb15c2e552f631646e6202148d91c5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:48,150 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286548150"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286548150"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286548150"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286548150"}]},"ts":"1689286548150"} 2023-07-13 22:15:48,160 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=30 2023-07-13 22:15:48,160 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=30, state=SUCCESS; OpenRegionProcedure c1bb15c2e552f631646e6202148d91c5, server=jenkins-hbase4.apache.org,39109,1689286541053 in 302 msec 2023-07-13 22:15:48,162 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c1bb15c2e552f631646e6202148d91c5, REOPEN/MOVE in 664 msec 2023-07-13 22:15:48,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-13 22:15:48,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1384789718. 
2023-07-13 22:15:48,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:48,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:48,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:48,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:48,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:15:48,522 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:48,529 INFO [Listener at localhost/39613] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:48,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:48,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:48,554 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286548554"}]},"ts":"1689286548554"} 2023-07-13 22:15:48,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-13 22:15:48,561 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-13 22:15:48,565 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-13 22:15:48,585 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e0f8aa9715aebf8be99593740ebc2d9, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a58e66976d6cc729daf7d8036b8e1184, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ad373bac3ecfe75a60a424a165f42e, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c1bb15c2e552f631646e6202148d91c5, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=ba20775a194a410cd841460231e4759d, UNASSIGN}] 2023-07-13 22:15:48,588 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ba20775a194a410cd841460231e4759d, UNASSIGN 2023-07-13 22:15:48,591 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=ba20775a194a410cd841460231e4759d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:48,591 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286548591"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286548591"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286548591"}]},"ts":"1689286548591"} 2023-07-13 22:15:48,591 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c1bb15c2e552f631646e6202148d91c5, UNASSIGN 2023-07-13 22:15:48,591 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ad373bac3ecfe75a60a424a165f42e, UNASSIGN 2023-07-13 22:15:48,592 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a58e66976d6cc729daf7d8036b8e1184, UNASSIGN 2023-07-13 22:15:48,592 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e0f8aa9715aebf8be99593740ebc2d9, UNASSIGN 2023-07-13 22:15:48,595 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=17ad373bac3ecfe75a60a424a165f42e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:48,595 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=a58e66976d6cc729daf7d8036b8e1184, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:48,595 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=8e0f8aa9715aebf8be99593740ebc2d9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:48,595 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286548595"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286548595"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286548595"}]},"ts":"1689286548595"} 2023-07-13 22:15:48,595 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=c1bb15c2e552f631646e6202148d91c5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:48,595 DEBUG [PEWorker-3] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286548595"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286548595"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286548595"}]},"ts":"1689286548595"} 2023-07-13 22:15:48,595 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286548595"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286548595"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286548595"}]},"ts":"1689286548595"} 2023-07-13 22:15:48,595 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=46, state=RUNNABLE; CloseRegionProcedure ba20775a194a410cd841460231e4759d, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:48,595 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286548595"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286548595"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286548595"}]},"ts":"1689286548595"} 2023-07-13 22:15:48,598 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=43, state=RUNNABLE; CloseRegionProcedure a58e66976d6cc729daf7d8036b8e1184, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:48,606 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=42, state=RUNNABLE; CloseRegionProcedure 8e0f8aa9715aebf8be99593740ebc2d9, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:48,608 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=44, state=RUNNABLE; CloseRegionProcedure 17ad373bac3ecfe75a60a424a165f42e, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:48,609 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=45, state=RUNNABLE; CloseRegionProcedure c1bb15c2e552f631646e6202148d91c5, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:48,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-13 22:15:48,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ba20775a194a410cd841460231e4759d 2023-07-13 22:15:48,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ba20775a194a410cd841460231e4759d, disabling compactions & flushes 2023-07-13 22:15:48,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 2023-07-13 22:15:48,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 
2023-07-13 22:15:48,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. after waiting 0 ms 2023-07-13 22:15:48,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 2023-07-13 22:15:48,759 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:48,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a58e66976d6cc729daf7d8036b8e1184, disabling compactions & flushes 2023-07-13 22:15:48,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:48,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:48,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. after waiting 0 ms 2023-07-13 22:15:48,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:48,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 22:15:48,772 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d. 2023-07-13 22:15:48,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ba20775a194a410cd841460231e4759d: 2023-07-13 22:15:48,774 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ba20775a194a410cd841460231e4759d 2023-07-13 22:15:48,775 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:48,776 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 17ad373bac3ecfe75a60a424a165f42e, disabling compactions & flushes 2023-07-13 22:15:48,776 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:48,776 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 
2023-07-13 22:15:48,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. after waiting 0 ms 2023-07-13 22:15:48,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:48,777 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=ba20775a194a410cd841460231e4759d, regionState=CLOSED 2023-07-13 22:15:48,777 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286548777"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286548777"}]},"ts":"1689286548777"} 2023-07-13 22:15:48,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 22:15:48,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184. 2023-07-13 22:15:48,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a58e66976d6cc729daf7d8036b8e1184: 2023-07-13 22:15:48,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:48,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:48,796 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=46 2023-07-13 22:15:48,796 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=46, state=SUCCESS; CloseRegionProcedure ba20775a194a410cd841460231e4759d, server=jenkins-hbase4.apache.org,39109,1689286541053 in 189 msec 2023-07-13 22:15:48,796 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=a58e66976d6cc729daf7d8036b8e1184, regionState=CLOSED 2023-07-13 22:15:48,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8e0f8aa9715aebf8be99593740ebc2d9, disabling compactions & flushes 2023-07-13 22:15:48,797 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:48,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:48,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 
after waiting 0 ms 2023-07-13 22:15:48,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 2023-07-13 22:15:48,797 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286548796"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286548796"}]},"ts":"1689286548796"} 2023-07-13 22:15:48,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 22:15:48,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e. 2023-07-13 22:15:48,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 17ad373bac3ecfe75a60a424a165f42e: 2023-07-13 22:15:48,800 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ba20775a194a410cd841460231e4759d, UNASSIGN in 211 msec 2023-07-13 22:15:48,803 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:48,803 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:48,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c1bb15c2e552f631646e6202148d91c5, disabling compactions & flushes 2023-07-13 22:15:48,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:48,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 2023-07-13 22:15:48,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. after waiting 0 ms 2023-07-13 22:15:48,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 
2023-07-13 22:15:48,809 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=17ad373bac3ecfe75a60a424a165f42e, regionState=CLOSED 2023-07-13 22:15:48,810 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286548809"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286548809"}]},"ts":"1689286548809"} 2023-07-13 22:15:48,810 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=43 2023-07-13 22:15:48,810 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=43, state=SUCCESS; CloseRegionProcedure a58e66976d6cc729daf7d8036b8e1184, server=jenkins-hbase4.apache.org,38543,1689286541242 in 202 msec 2023-07-13 22:15:48,812 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a58e66976d6cc729daf7d8036b8e1184, UNASSIGN in 225 msec 2023-07-13 22:15:48,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 22:15:48,833 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=44 2023-07-13 22:15:48,833 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=44, state=SUCCESS; CloseRegionProcedure 17ad373bac3ecfe75a60a424a165f42e, server=jenkins-hbase4.apache.org,39109,1689286541053 in 204 msec 2023-07-13 22:15:48,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5. 
2023-07-13 22:15:48,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c1bb15c2e552f631646e6202148d91c5: 2023-07-13 22:15:48,837 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ad373bac3ecfe75a60a424a165f42e, UNASSIGN in 248 msec 2023-07-13 22:15:48,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:48,842 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=c1bb15c2e552f631646e6202148d91c5, regionState=CLOSED 2023-07-13 22:15:48,842 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286548842"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286548842"}]},"ts":"1689286548842"} 2023-07-13 22:15:48,848 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=45 2023-07-13 22:15:48,848 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=45, state=SUCCESS; CloseRegionProcedure c1bb15c2e552f631646e6202148d91c5, server=jenkins-hbase4.apache.org,39109,1689286541053 in 236 msec 2023-07-13 22:15:48,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 22:15:48,852 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c1bb15c2e552f631646e6202148d91c5, UNASSIGN in 264 msec 2023-07-13 22:15:48,852 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9. 
2023-07-13 22:15:48,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8e0f8aa9715aebf8be99593740ebc2d9: 2023-07-13 22:15:48,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:48,859 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=8e0f8aa9715aebf8be99593740ebc2d9, regionState=CLOSED 2023-07-13 22:15:48,859 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286548859"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286548859"}]},"ts":"1689286548859"} 2023-07-13 22:15:48,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-13 22:15:48,868 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=42 2023-07-13 22:15:48,868 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=42, state=SUCCESS; CloseRegionProcedure 8e0f8aa9715aebf8be99593740ebc2d9, server=jenkins-hbase4.apache.org,38543,1689286541242 in 256 msec 2023-07-13 22:15:48,874 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=41 2023-07-13 22:15:48,875 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8e0f8aa9715aebf8be99593740ebc2d9, UNASSIGN in 283 msec 2023-07-13 22:15:48,876 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286548876"}]},"ts":"1689286548876"} 2023-07-13 22:15:48,878 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-13 22:15:48,880 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-13 22:15:48,883 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 344 msec 2023-07-13 22:15:49,164 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 22:15:49,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-13 22:15:49,170 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-13 22:15:49,171 INFO [Listener at localhost/39613] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:49,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:49,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; 
TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-13 22:15:49,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-13 22:15:49,196 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-13 22:15:49,218 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:49,219 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d 2023-07-13 22:15:49,218 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:49,219 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:49,218 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:49,224 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e/recovered.edits] 2023-07-13 22:15:49,224 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5/recovered.edits] 2023-07-13 22:15:49,224 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d/recovered.edits] 2023-07-13 22:15:49,224 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184/f, FileablePath, 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184/recovered.edits] 2023-07-13 22:15:49,224 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9/recovered.edits] 2023-07-13 22:15:49,247 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184/recovered.edits/7.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184/recovered.edits/7.seqid 2023-07-13 22:15:49,248 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d/recovered.edits/7.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d/recovered.edits/7.seqid 2023-07-13 22:15:49,249 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5/recovered.edits/7.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5/recovered.edits/7.seqid 2023-07-13 22:15:49,250 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9/recovered.edits/7.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9/recovered.edits/7.seqid 2023-07-13 22:15:49,250 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a58e66976d6cc729daf7d8036b8e1184 2023-07-13 22:15:49,250 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e/recovered.edits/7.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e/recovered.edits/7.seqid 2023-07-13 22:15:49,251 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ba20775a194a410cd841460231e4759d 2023-07-13 22:15:49,251 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 22:15:49,251 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c1bb15c2e552f631646e6202148d91c5 2023-07-13 22:15:49,251 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ad373bac3ecfe75a60a424a165f42e 2023-07-13 22:15:49,251 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8e0f8aa9715aebf8be99593740ebc2d9 2023-07-13 22:15:49,252 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 22:15:49,252 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 22:15:49,253 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 22:15:49,254 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-13 22:15:49,254 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 22:15:49,254 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-13 22:15:49,255 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 22:15:49,255 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-13 22:15:49,292 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-13 22:15:49,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-13 22:15:49,297 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-13 22:15:49,297 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-13 22:15:49,298 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286549298"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:49,298 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286549298"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:49,298 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286549298"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:49,298 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286549298"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:49,298 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286549298"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:49,310 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-13 22:15:49,311 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8e0f8aa9715aebf8be99593740ebc2d9, NAME => 'Group_testTableMoveTruncateAndDrop,,1689286546257.8e0f8aa9715aebf8be99593740ebc2d9.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => a58e66976d6cc729daf7d8036b8e1184, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689286546257.a58e66976d6cc729daf7d8036b8e1184.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 17ad373bac3ecfe75a60a424a165f42e, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286546257.17ad373bac3ecfe75a60a424a165f42e.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => c1bb15c2e552f631646e6202148d91c5, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286546257.c1bb15c2e552f631646e6202148d91c5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => ba20775a194a410cd841460231e4759d, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689286546257.ba20775a194a410cd841460231e4759d.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-13 22:15:49,311 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-13 22:15:49,311 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689286549311"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:49,319 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-13 22:15:49,330 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:49,330 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:49,330 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:49,330 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:49,330 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:49,331 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8 empty. 2023-07-13 22:15:49,331 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53 empty. 2023-07-13 22:15:49,331 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0 empty. 2023-07-13 22:15:49,332 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186 empty. 2023-07-13 22:15:49,332 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5 empty. 
2023-07-13 22:15:49,332 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:49,332 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:49,333 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:49,333 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:49,333 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:49,333 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 22:15:49,397 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-13 22:15:49,407 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => da3ae932fd2f947927cdc24fd85b29e0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:49,408 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => fd8e418349c7fcbc45308496905827b5, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:49,410 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 55127ca3adf95b298c91354fbf9b3a53, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:49,490 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:49,491 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing da3ae932fd2f947927cdc24fd85b29e0, disabling compactions & flushes 2023-07-13 22:15:49,491 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 2023-07-13 22:15:49,491 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 2023-07-13 22:15:49,491 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. after waiting 0 ms 2023-07-13 22:15:49,491 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 2023-07-13 22:15:49,491 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 
2023-07-13 22:15:49,491 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for da3ae932fd2f947927cdc24fd85b29e0: 2023-07-13 22:15:49,493 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3c0463ab38a0d51ef16a8716f7feb186, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:49,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-13 22:15:49,506 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:49,506 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:49,506 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing fd8e418349c7fcbc45308496905827b5, disabling compactions & flushes 2023-07-13 22:15:49,506 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 2023-07-13 22:15:49,506 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 2023-07-13 22:15:49,507 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. after waiting 0 ms 2023-07-13 22:15:49,507 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 2023-07-13 22:15:49,507 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 
2023-07-13 22:15:49,507 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for fd8e418349c7fcbc45308496905827b5: 2023-07-13 22:15:49,507 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => e3c5e880d498035e2142ccaedf2234a8, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:49,506 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 55127ca3adf95b298c91354fbf9b3a53, disabling compactions & flushes 2023-07-13 22:15:49,508 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 2023-07-13 22:15:49,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 2023-07-13 22:15:49,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. after waiting 0 ms 2023-07-13 22:15:49,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 2023-07-13 22:15:49,508 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 2023-07-13 22:15:49,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 55127ca3adf95b298c91354fbf9b3a53: 2023-07-13 22:15:49,529 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:49,529 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 3c0463ab38a0d51ef16a8716f7feb186, disabling compactions & flushes 2023-07-13 22:15:49,529 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. 
2023-07-13 22:15:49,529 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. 2023-07-13 22:15:49,529 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. after waiting 0 ms 2023-07-13 22:15:49,529 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. 2023-07-13 22:15:49,529 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. 2023-07-13 22:15:49,530 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 3c0463ab38a0d51ef16a8716f7feb186: 2023-07-13 22:15:49,535 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:49,535 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing e3c5e880d498035e2142ccaedf2234a8, disabling compactions & flushes 2023-07-13 22:15:49,535 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. 2023-07-13 22:15:49,535 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. 2023-07-13 22:15:49,535 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. after waiting 0 ms 2023-07-13 22:15:49,535 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. 2023-07-13 22:15:49,535 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. 
2023-07-13 22:15:49,535 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for e3c5e880d498035e2142ccaedf2234a8: 2023-07-13 22:15:49,540 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286549540"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286549540"}]},"ts":"1689286549540"} 2023-07-13 22:15:49,540 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286549540"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286549540"}]},"ts":"1689286549540"} 2023-07-13 22:15:49,540 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286549540"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286549540"}]},"ts":"1689286549540"} 2023-07-13 22:15:49,540 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286549540"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286549540"}]},"ts":"1689286549540"} 2023-07-13 22:15:49,540 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286549540"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286549540"}]},"ts":"1689286549540"} 2023-07-13 22:15:49,546 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-13 22:15:49,548 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286549548"}]},"ts":"1689286549548"} 2023-07-13 22:15:49,550 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-13 22:15:49,556 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:49,556 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:49,556 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:49,556 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:49,559 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3ae932fd2f947927cdc24fd85b29e0, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fd8e418349c7fcbc45308496905827b5, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55127ca3adf95b298c91354fbf9b3a53, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c0463ab38a0d51ef16a8716f7feb186, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3c5e880d498035e2142ccaedf2234a8, ASSIGN}] 2023-07-13 22:15:49,561 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c0463ab38a0d51ef16a8716f7feb186, ASSIGN 2023-07-13 22:15:49,564 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3c5e880d498035e2142ccaedf2234a8, ASSIGN 2023-07-13 22:15:49,564 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c0463ab38a0d51ef16a8716f7feb186, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38543,1689286541242; forceNewPlan=false, retain=false 2023-07-13 22:15:49,564 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55127ca3adf95b298c91354fbf9b3a53, ASSIGN 2023-07-13 22:15:49,565 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=fd8e418349c7fcbc45308496905827b5, ASSIGN 2023-07-13 22:15:49,565 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3ae932fd2f947927cdc24fd85b29e0, ASSIGN 2023-07-13 22:15:49,567 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55127ca3adf95b298c91354fbf9b3a53, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39109,1689286541053; forceNewPlan=false, retain=false 2023-07-13 22:15:49,567 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3c5e880d498035e2142ccaedf2234a8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39109,1689286541053; forceNewPlan=false, retain=false 2023-07-13 22:15:49,571 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fd8e418349c7fcbc45308496905827b5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38543,1689286541242; forceNewPlan=false, retain=false 2023-07-13 22:15:49,571 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3ae932fd2f947927cdc24fd85b29e0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39109,1689286541053; forceNewPlan=false, retain=false 2023-07-13 22:15:49,715 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-13 22:15:49,719 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=3c0463ab38a0d51ef16a8716f7feb186, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:49,719 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=da3ae932fd2f947927cdc24fd85b29e0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:49,719 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=e3c5e880d498035e2142ccaedf2234a8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:49,719 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=fd8e418349c7fcbc45308496905827b5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:49,719 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286549719"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286549719"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286549719"}]},"ts":"1689286549719"} 2023-07-13 22:15:49,719 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286549719"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286549719"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286549719"}]},"ts":"1689286549719"} 2023-07-13 22:15:49,719 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=55127ca3adf95b298c91354fbf9b3a53, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:49,719 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286549719"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286549719"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286549719"}]},"ts":"1689286549719"} 2023-07-13 22:15:49,719 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286549719"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286549719"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286549719"}]},"ts":"1689286549719"} 2023-07-13 22:15:49,719 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286549719"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286549719"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286549719"}]},"ts":"1689286549719"} 2023-07-13 22:15:49,722 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=53, state=RUNNABLE; OpenRegionProcedure 
da3ae932fd2f947927cdc24fd85b29e0, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:49,723 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=54, state=RUNNABLE; OpenRegionProcedure fd8e418349c7fcbc45308496905827b5, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:49,725 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=55, state=RUNNABLE; OpenRegionProcedure 55127ca3adf95b298c91354fbf9b3a53, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:49,726 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=57, state=RUNNABLE; OpenRegionProcedure e3c5e880d498035e2142ccaedf2234a8, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:49,727 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=56, state=RUNNABLE; OpenRegionProcedure 3c0463ab38a0d51ef16a8716f7feb186, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:49,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-13 22:15:49,878 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. 2023-07-13 22:15:49,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e3c5e880d498035e2142ccaedf2234a8, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 22:15:49,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:49,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:49,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:49,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:49,881 INFO [StoreOpener-e3c5e880d498035e2142ccaedf2234a8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:49,881 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 
2023-07-13 22:15:49,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fd8e418349c7fcbc45308496905827b5, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 22:15:49,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:49,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:49,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:49,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:49,882 DEBUG [StoreOpener-e3c5e880d498035e2142ccaedf2234a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8/f 2023-07-13 22:15:49,882 DEBUG [StoreOpener-e3c5e880d498035e2142ccaedf2234a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8/f 2023-07-13 22:15:49,883 INFO [StoreOpener-e3c5e880d498035e2142ccaedf2234a8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e3c5e880d498035e2142ccaedf2234a8 columnFamilyName f 2023-07-13 22:15:49,883 INFO [StoreOpener-fd8e418349c7fcbc45308496905827b5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:49,884 INFO [StoreOpener-e3c5e880d498035e2142ccaedf2234a8-1] regionserver.HStore(310): Store=e3c5e880d498035e2142ccaedf2234a8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:49,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:49,885 DEBUG [StoreOpener-fd8e418349c7fcbc45308496905827b5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5/f 2023-07-13 22:15:49,885 DEBUG [StoreOpener-fd8e418349c7fcbc45308496905827b5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5/f 2023-07-13 22:15:49,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:49,885 INFO [StoreOpener-fd8e418349c7fcbc45308496905827b5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fd8e418349c7fcbc45308496905827b5 columnFamilyName f 2023-07-13 22:15:49,886 INFO [StoreOpener-fd8e418349c7fcbc45308496905827b5-1] regionserver.HStore(310): Store=fd8e418349c7fcbc45308496905827b5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:49,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:49,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:49,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:49,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:49,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:49,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:49,904 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e3c5e880d498035e2142ccaedf2234a8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11131186560, jitterRate=0.03667253255844116}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:49,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e3c5e880d498035e2142ccaedf2234a8: 2023-07-13 22:15:49,904 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fd8e418349c7fcbc45308496905827b5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9785390400, jitterRate=-0.08866450190544128}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:49,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fd8e418349c7fcbc45308496905827b5: 2023-07-13 22:15:49,905 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8., pid=61, masterSystemTime=1689286549874 2023-07-13 22:15:49,905 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5., pid=59, masterSystemTime=1689286549877 2023-07-13 22:15:49,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 2023-07-13 22:15:49,910 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 2023-07-13 22:15:49,910 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. 
2023-07-13 22:15:49,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3c0463ab38a0d51ef16a8716f7feb186, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 22:15:49,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:49,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:49,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:49,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:49,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. 2023-07-13 22:15:49,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. 2023-07-13 22:15:49,915 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=fd8e418349c7fcbc45308496905827b5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:49,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 
2023-07-13 22:15:49,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da3ae932fd2f947927cdc24fd85b29e0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 22:15:49,915 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286549915"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286549915"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286549915"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286549915"}]},"ts":"1689286549915"} 2023-07-13 22:15:49,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:49,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:49,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:49,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:49,916 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=e3c5e880d498035e2142ccaedf2234a8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:49,916 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286549916"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286549916"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286549916"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286549916"}]},"ts":"1689286549916"} 2023-07-13 22:15:49,921 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=54 2023-07-13 22:15:49,921 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=54, state=SUCCESS; OpenRegionProcedure fd8e418349c7fcbc45308496905827b5, server=jenkins-hbase4.apache.org,38543,1689286541242 in 195 msec 2023-07-13 22:15:49,923 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=57 2023-07-13 22:15:49,923 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=57, state=SUCCESS; OpenRegionProcedure e3c5e880d498035e2142ccaedf2234a8, server=jenkins-hbase4.apache.org,39109,1689286541053 in 193 msec 2023-07-13 22:15:49,924 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fd8e418349c7fcbc45308496905827b5, ASSIGN in 362 msec 2023-07-13 22:15:49,924 INFO 
[StoreOpener-da3ae932fd2f947927cdc24fd85b29e0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:49,925 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3c5e880d498035e2142ccaedf2234a8, ASSIGN in 364 msec 2023-07-13 22:15:49,927 DEBUG [StoreOpener-da3ae932fd2f947927cdc24fd85b29e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0/f 2023-07-13 22:15:49,927 DEBUG [StoreOpener-da3ae932fd2f947927cdc24fd85b29e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0/f 2023-07-13 22:15:49,927 INFO [StoreOpener-da3ae932fd2f947927cdc24fd85b29e0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da3ae932fd2f947927cdc24fd85b29e0 columnFamilyName f 2023-07-13 22:15:49,928 INFO [StoreOpener-da3ae932fd2f947927cdc24fd85b29e0-1] regionserver.HStore(310): Store=da3ae932fd2f947927cdc24fd85b29e0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:49,929 INFO [StoreOpener-3c0463ab38a0d51ef16a8716f7feb186-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:49,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:49,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:49,932 DEBUG [StoreOpener-3c0463ab38a0d51ef16a8716f7feb186-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186/f 2023-07-13 
22:15:49,932 DEBUG [StoreOpener-3c0463ab38a0d51ef16a8716f7feb186-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186/f 2023-07-13 22:15:49,932 INFO [StoreOpener-3c0463ab38a0d51ef16a8716f7feb186-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3c0463ab38a0d51ef16a8716f7feb186 columnFamilyName f 2023-07-13 22:15:49,933 INFO [StoreOpener-3c0463ab38a0d51ef16a8716f7feb186-1] regionserver.HStore(310): Store=3c0463ab38a0d51ef16a8716f7feb186/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:49,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:49,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:49,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:49,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:49,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened da3ae932fd2f947927cdc24fd85b29e0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11211255200, jitterRate=0.04412950575351715}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:49,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for da3ae932fd2f947927cdc24fd85b29e0: 2023-07-13 22:15:49,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:49,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:49,942 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3c0463ab38a0d51ef16a8716f7feb186; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11823745600, jitterRate=0.10117211937904358}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:49,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3c0463ab38a0d51ef16a8716f7feb186: 2023-07-13 22:15:49,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0., pid=58, masterSystemTime=1689286549874 2023-07-13 22:15:49,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186., pid=62, masterSystemTime=1689286549877 2023-07-13 22:15:49,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 2023-07-13 22:15:49,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 2023-07-13 22:15:49,955 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 2023-07-13 22:15:49,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 55127ca3adf95b298c91354fbf9b3a53, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 22:15:49,955 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=da3ae932fd2f947927cdc24fd85b29e0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:49,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. 2023-07-13 22:15:49,956 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. 
2023-07-13 22:15:49,955 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286549955"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286549955"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286549955"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286549955"}]},"ts":"1689286549955"} 2023-07-13 22:15:49,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:49,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:49,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:49,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:49,958 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=3c0463ab38a0d51ef16a8716f7feb186, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:49,958 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286549958"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286549958"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286549958"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286549958"}]},"ts":"1689286549958"} 2023-07-13 22:15:49,959 INFO [StoreOpener-55127ca3adf95b298c91354fbf9b3a53-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:49,962 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=53 2023-07-13 22:15:49,962 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=53, state=SUCCESS; OpenRegionProcedure da3ae932fd2f947927cdc24fd85b29e0, server=jenkins-hbase4.apache.org,39109,1689286541053 in 237 msec 2023-07-13 22:15:49,963 DEBUG [StoreOpener-55127ca3adf95b298c91354fbf9b3a53-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53/f 2023-07-13 22:15:49,963 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=56 2023-07-13 22:15:49,963 DEBUG [StoreOpener-55127ca3adf95b298c91354fbf9b3a53-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53/f 2023-07-13 22:15:49,963 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=56, state=SUCCESS; OpenRegionProcedure 3c0463ab38a0d51ef16a8716f7feb186, server=jenkins-hbase4.apache.org,38543,1689286541242 in 233 msec 2023-07-13 22:15:49,964 INFO [StoreOpener-55127ca3adf95b298c91354fbf9b3a53-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 55127ca3adf95b298c91354fbf9b3a53 columnFamilyName f 2023-07-13 22:15:49,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3ae932fd2f947927cdc24fd85b29e0, ASSIGN in 406 msec 2023-07-13 22:15:49,965 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c0463ab38a0d51ef16a8716f7feb186, ASSIGN in 404 msec 2023-07-13 22:15:49,965 INFO [StoreOpener-55127ca3adf95b298c91354fbf9b3a53-1] regionserver.HStore(310): Store=55127ca3adf95b298c91354fbf9b3a53/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:49,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:49,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:49,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:49,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:49,975 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 55127ca3adf95b298c91354fbf9b3a53; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10247172480, jitterRate=-0.0456576943397522}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:49,975 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 55127ca3adf95b298c91354fbf9b3a53: 2023-07-13 22:15:49,976 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53., pid=60, masterSystemTime=1689286549874 2023-07-13 22:15:49,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 2023-07-13 22:15:49,978 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 2023-07-13 22:15:49,979 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=55127ca3adf95b298c91354fbf9b3a53, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:49,979 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286549979"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286549979"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286549979"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286549979"}]},"ts":"1689286549979"} 2023-07-13 22:15:49,984 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=55 2023-07-13 22:15:49,984 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; OpenRegionProcedure 55127ca3adf95b298c91354fbf9b3a53, server=jenkins-hbase4.apache.org,39109,1689286541053 in 256 msec 2023-07-13 22:15:49,987 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=52 2023-07-13 22:15:49,987 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55127ca3adf95b298c91354fbf9b3a53, ASSIGN in 425 msec 2023-07-13 22:15:49,987 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286549987"}]},"ts":"1689286549987"} 2023-07-13 22:15:49,989 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-13 22:15:49,991 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-13 22:15:49,993 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 811 msec 2023-07-13 22:15:50,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-13 22:15:50,300 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-13 22:15:50,301 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:50,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:50,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:50,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:50,303 INFO [Listener at localhost/39613] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:50,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:50,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:50,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-13 22:15:50,310 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286550310"}]},"ts":"1689286550310"} 2023-07-13 22:15:50,311 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-13 22:15:50,317 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-13 22:15:50,318 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3ae932fd2f947927cdc24fd85b29e0, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fd8e418349c7fcbc45308496905827b5, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55127ca3adf95b298c91354fbf9b3a53, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c0463ab38a0d51ef16a8716f7feb186, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3c5e880d498035e2142ccaedf2234a8, UNASSIGN}] 2023-07-13 22:15:50,320 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=da3ae932fd2f947927cdc24fd85b29e0, UNASSIGN 2023-07-13 22:15:50,321 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fd8e418349c7fcbc45308496905827b5, UNASSIGN 2023-07-13 22:15:50,321 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c0463ab38a0d51ef16a8716f7feb186, UNASSIGN 2023-07-13 22:15:50,321 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55127ca3adf95b298c91354fbf9b3a53, UNASSIGN 2023-07-13 22:15:50,321 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3c5e880d498035e2142ccaedf2234a8, UNASSIGN 2023-07-13 22:15:50,322 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=da3ae932fd2f947927cdc24fd85b29e0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:50,322 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286550322"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286550322"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286550322"}]},"ts":"1689286550322"} 2023-07-13 22:15:50,322 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=55127ca3adf95b298c91354fbf9b3a53, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:50,322 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=e3c5e880d498035e2142ccaedf2234a8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:50,323 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286550322"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286550322"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286550322"}]},"ts":"1689286550322"} 2023-07-13 22:15:50,323 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286550322"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286550322"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286550322"}]},"ts":"1689286550322"} 2023-07-13 22:15:50,323 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=3c0463ab38a0d51ef16a8716f7feb186, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:50,323 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta 
row=fd8e418349c7fcbc45308496905827b5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:50,323 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286550323"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286550323"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286550323"}]},"ts":"1689286550323"} 2023-07-13 22:15:50,323 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286550323"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286550323"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286550323"}]},"ts":"1689286550323"} 2023-07-13 22:15:50,325 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=64, state=RUNNABLE; CloseRegionProcedure da3ae932fd2f947927cdc24fd85b29e0, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:50,326 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=66, state=RUNNABLE; CloseRegionProcedure 55127ca3adf95b298c91354fbf9b3a53, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:50,327 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=68, state=RUNNABLE; CloseRegionProcedure e3c5e880d498035e2142ccaedf2234a8, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:50,328 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=67, state=RUNNABLE; CloseRegionProcedure 3c0463ab38a0d51ef16a8716f7feb186, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:50,331 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=65, state=RUNNABLE; CloseRegionProcedure fd8e418349c7fcbc45308496905827b5, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:50,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-13 22:15:50,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:50,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing da3ae932fd2f947927cdc24fd85b29e0, disabling compactions & flushes 2023-07-13 22:15:50,479 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 2023-07-13 22:15:50,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 2023-07-13 22:15:50,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 
after waiting 0 ms 2023-07-13 22:15:50,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 2023-07-13 22:15:50,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:50,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3c0463ab38a0d51ef16a8716f7feb186, disabling compactions & flushes 2023-07-13 22:15:50,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:50,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. 2023-07-13 22:15:50,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. 2023-07-13 22:15:50,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. after waiting 0 ms 2023-07-13 22:15:50,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. 2023-07-13 22:15:50,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0. 2023-07-13 22:15:50,486 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for da3ae932fd2f947927cdc24fd85b29e0: 2023-07-13 22:15:50,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:50,488 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:50,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 55127ca3adf95b298c91354fbf9b3a53, disabling compactions & flushes 2023-07-13 22:15:50,489 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 2023-07-13 22:15:50,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 2023-07-13 22:15:50,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 
after waiting 0 ms 2023-07-13 22:15:50,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 2023-07-13 22:15:50,490 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=da3ae932fd2f947927cdc24fd85b29e0, regionState=CLOSED 2023-07-13 22:15:50,490 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286550489"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286550489"}]},"ts":"1689286550489"} 2023-07-13 22:15:50,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:50,492 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186. 2023-07-13 22:15:50,492 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3c0463ab38a0d51ef16a8716f7feb186: 2023-07-13 22:15:50,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:50,494 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53. 2023-07-13 22:15:50,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 55127ca3adf95b298c91354fbf9b3a53: 2023-07-13 22:15:50,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:50,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:50,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fd8e418349c7fcbc45308496905827b5, disabling compactions & flushes 2023-07-13 22:15:50,501 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 2023-07-13 22:15:50,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 2023-07-13 22:15:50,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 
after waiting 0 ms 2023-07-13 22:15:50,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 2023-07-13 22:15:50,502 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=64 2023-07-13 22:15:50,502 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=3c0463ab38a0d51ef16a8716f7feb186, regionState=CLOSED 2023-07-13 22:15:50,502 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=64, state=SUCCESS; CloseRegionProcedure da3ae932fd2f947927cdc24fd85b29e0, server=jenkins-hbase4.apache.org,39109,1689286541053 in 167 msec 2023-07-13 22:15:50,502 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286550502"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286550502"}]},"ts":"1689286550502"} 2023-07-13 22:15:50,504 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:50,504 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=55127ca3adf95b298c91354fbf9b3a53, regionState=CLOSED 2023-07-13 22:15:50,504 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:50,505 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da3ae932fd2f947927cdc24fd85b29e0, UNASSIGN in 184 msec 2023-07-13 22:15:50,505 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286550504"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286550504"}]},"ts":"1689286550504"} 2023-07-13 22:15:50,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e3c5e880d498035e2142ccaedf2234a8, disabling compactions & flushes 2023-07-13 22:15:50,506 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. 2023-07-13 22:15:50,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. 2023-07-13 22:15:50,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. after waiting 0 ms 2023-07-13 22:15:50,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. 
2023-07-13 22:15:50,510 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=67 2023-07-13 22:15:50,510 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=67, state=SUCCESS; CloseRegionProcedure 3c0463ab38a0d51ef16a8716f7feb186, server=jenkins-hbase4.apache.org,38543,1689286541242 in 179 msec 2023-07-13 22:15:50,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=66 2023-07-13 22:15:50,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; CloseRegionProcedure 55127ca3adf95b298c91354fbf9b3a53, server=jenkins-hbase4.apache.org,39109,1689286541053 in 182 msec 2023-07-13 22:15:50,512 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c0463ab38a0d51ef16a8716f7feb186, UNASSIGN in 192 msec 2023-07-13 22:15:50,512 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55127ca3adf95b298c91354fbf9b3a53, UNASSIGN in 193 msec 2023-07-13 22:15:50,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:50,515 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5. 2023-07-13 22:15:50,515 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fd8e418349c7fcbc45308496905827b5: 2023-07-13 22:15:50,517 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:50,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:50,518 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=fd8e418349c7fcbc45308496905827b5, regionState=CLOSED 2023-07-13 22:15:50,518 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689286550517"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286550517"}]},"ts":"1689286550517"} 2023-07-13 22:15:50,518 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8. 
2023-07-13 22:15:50,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e3c5e880d498035e2142ccaedf2234a8: 2023-07-13 22:15:50,520 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:50,520 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=e3c5e880d498035e2142ccaedf2234a8, regionState=CLOSED 2023-07-13 22:15:50,520 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689286550520"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286550520"}]},"ts":"1689286550520"} 2023-07-13 22:15:50,521 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=65 2023-07-13 22:15:50,522 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=65, state=SUCCESS; CloseRegionProcedure fd8e418349c7fcbc45308496905827b5, server=jenkins-hbase4.apache.org,38543,1689286541242 in 188 msec 2023-07-13 22:15:50,523 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fd8e418349c7fcbc45308496905827b5, UNASSIGN in 204 msec 2023-07-13 22:15:50,524 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=68 2023-07-13 22:15:50,524 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=68, state=SUCCESS; CloseRegionProcedure e3c5e880d498035e2142ccaedf2234a8, server=jenkins-hbase4.apache.org,39109,1689286541053 in 195 msec 2023-07-13 22:15:50,526 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=63 2023-07-13 22:15:50,526 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3c5e880d498035e2142ccaedf2234a8, UNASSIGN in 206 msec 2023-07-13 22:15:50,527 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286550527"}]},"ts":"1689286550527"} 2023-07-13 22:15:50,528 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-13 22:15:50,530 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-13 22:15:50,531 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 226 msec 2023-07-13 22:15:50,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-13 22:15:50,613 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-13 22:15:50,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:50,624 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:50,627 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:50,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1384789718' 2023-07-13 22:15:50,628 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:50,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:50,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:50,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:50,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:50,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-13 22:15:50,644 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:50,644 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:50,644 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:50,644 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:50,644 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:50,648 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0/recovered.edits] 2023-07-13 22:15:50,650 
DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5/recovered.edits] 2023-07-13 22:15:50,650 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8/recovered.edits] 2023-07-13 22:15:50,651 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53/recovered.edits] 2023-07-13 22:15:50,651 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186/recovered.edits] 2023-07-13 22:15:50,665 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53/recovered.edits/4.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53/recovered.edits/4.seqid 2023-07-13 22:15:50,666 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55127ca3adf95b298c91354fbf9b3a53 2023-07-13 22:15:50,667 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0/recovered.edits/4.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0/recovered.edits/4.seqid 2023-07-13 22:15:50,667 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186/recovered.edits/4.seqid to 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186/recovered.edits/4.seqid 2023-07-13 22:15:50,667 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5/recovered.edits/4.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5/recovered.edits/4.seqid 2023-07-13 22:15:50,668 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8/recovered.edits/4.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8/recovered.edits/4.seqid 2023-07-13 22:15:50,669 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da3ae932fd2f947927cdc24fd85b29e0 2023-07-13 22:15:50,669 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c0463ab38a0d51ef16a8716f7feb186 2023-07-13 22:15:50,669 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fd8e418349c7fcbc45308496905827b5 2023-07-13 22:15:50,669 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3c5e880d498035e2142ccaedf2234a8 2023-07-13 22:15:50,670 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 22:15:50,673 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:50,680 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-13 22:15:50,683 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-13 22:15:50,684 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:50,684 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-13 22:15:50,685 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286550684"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:50,685 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286550684"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:50,685 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286550684"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:50,685 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286550684"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:50,685 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286550684"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:50,687 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-13 22:15:50,687 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => da3ae932fd2f947927cdc24fd85b29e0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689286549254.da3ae932fd2f947927cdc24fd85b29e0.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => fd8e418349c7fcbc45308496905827b5, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689286549254.fd8e418349c7fcbc45308496905827b5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 55127ca3adf95b298c91354fbf9b3a53, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689286549254.55127ca3adf95b298c91354fbf9b3a53.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 3c0463ab38a0d51ef16a8716f7feb186, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689286549254.3c0463ab38a0d51ef16a8716f7feb186.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => e3c5e880d498035e2142ccaedf2234a8, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689286549254.e3c5e880d498035e2142ccaedf2234a8.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-13 22:15:50,687 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-13 22:15:50,687 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689286550687"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:50,689 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-13 22:15:50,691 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 22:15:50,693 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 72 msec 2023-07-13 22:15:50,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-13 22:15:50,744 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-13 22:15:50,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:50,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:50,754 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38543] ipc.CallRunner(144): callId: 156 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:33760 deadline: 1689286610754, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43571 startCode=1689286544760. As of locationSeqNum=6. 2023-07-13 22:15:50,858 DEBUG [hconnection-0x6e0626cd-shared-pool-9] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:15:50,868 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42566, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:15:50,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:50,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:50,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:15:50,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
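Taken together, the log records the full client-driven lifecycle of Group_testTableMoveTruncateAndDrop: TRUNCATE (procId 52, preserveSplits=true), DISABLE (procId 63) and DELETE (procId 74), each issued through HBaseAdmin and executed as master procedures. A minimal sketch of issuing the same sequence through the Admin API follows; it assumes a Configuration pointing at the running cluster and is illustrative, not the test's own code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

class TableLifecycleSketch {
  static void truncateThenDrop(Configuration conf) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      admin.disableTable(table);          // truncate requires a disabled table
      admin.truncateTable(table, true);   // preserveSplits=true, as in the logged procedure
      // The truncate re-enables the table; the later teardown disables and drops it:
      admin.disableTable(table);
      admin.deleteTable(table);
    }
  }
}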
2023-07-13 22:15:50,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:50,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:38543] to rsgroup default 2023-07-13 22:15:50,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:50,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:50,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:50,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:50,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1384789718, current retry=0 2023-07-13 22:15:50,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242, jenkins-hbase4.apache.org,39109,1689286541053] are moved back to Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:50,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1384789718 => default 2023-07-13 22:15:50,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:50,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1384789718 2023-07-13 22:15:50,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:50,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:50,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 22:15:50,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:15:50,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:15:50,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
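The MoveTables / MoveServers / RemoveRSGroup requests logged above are the standard rsgroup cleanup: move everything back to the default group, then drop the now-empty Group_testTableMoveTruncateAndDrop_1384789718. A sketch of the same calls through RSGroupAdminClient (the client class that appears in the stack trace further below) follows; the method signatures are assumed from the 2.x hbase-rsgroup module and the server addresses are simply the ones logged, so treat this as illustrative:

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

class RSGroupCleanupSketch {
  static void restoreDefaults(Connection connection) throws Exception {
    RSGroupAdminClient groupAdmin = new RSGroupAdminClient(connection);
    String group = "Group_testTableMoveTruncateAndDrop_1384789718";
    // Tables first; an empty set is accepted and ignored, as the log shows.
    groupAdmin.moveTables(Collections.<TableName>emptySet(), "default");
    // Then the region servers that were carved out for the group.
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39109));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 38543));
    groupAdmin.moveServers(servers, "default");
    // Finally remove the emptied group.
    groupAdmin.removeRSGroup(group);
  }
}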
2023-07-13 22:15:50,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:50,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:15:50,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:50,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:15:50,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:50,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:15:50,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:15:50,930 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:15:50,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:15:50,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:50,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:50,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:15:50,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:15:50,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:50,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:50,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:15:50,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:50,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 148 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287750944, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:15:50,945 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:15:50,947 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:50,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:50,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:50,948 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:15:50,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:15:50,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:50,979 INFO [Listener at localhost/39613] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=495 (was 424) Potentially hanging thread: PacketResponder: BP-1576339184-172.31.14.131-1689286535402:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43571Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:42191 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp2082329937-637 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1959294063.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_571360115_17 at /127.0.0.1:40360 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1235144863_17 at /127.0.0.1:40282 [Receiving block BP-1576339184-172.31.14.131-1689286535402:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:42191 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43571-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43571 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2082329937-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2082329937-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2082329937-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54493@0x6f7c8fc8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2026061141.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1268139192_17 at /127.0.0.1:60426 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54493@0x6f7c8fc8-SendThread(127.0.0.1:54493) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1576339184-172.31.14.131-1689286535402:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2082329937-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43571 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2082329937-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-d4c50ed-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2082329937-638-acceptor-0@660341a4-ServerConnector@b9fe3db{HTTP/1.1, (http/1.1)}{0.0.0.0:34961} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c-prefix:jenkins-hbase4.apache.org,43571,1689286544760 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1235144863_17 at /127.0.0.1:60438 [Receiving block BP-1576339184-172.31.14.131-1689286535402:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-888658984_17 at /127.0.0.1:45722 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1576339184-172.31.14.131-1689286535402:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1235144863_17 at /127.0.0.1:45654 [Receiving block BP-1576339184-172.31.14.131-1689286535402:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2082329937-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54493@0x6f7c8fc8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=779 (was 677) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=385 (was 364) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=4516 (was 5195) 2023-07-13 22:15:50,999 INFO [Listener at localhost/39613] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=495, OpenFileDescriptor=779, MaxFileDescriptor=60000, SystemLoadAverage=385, ProcessCount=172, AvailableMemoryMB=4515 2023-07-13 22:15:51,001 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-13 22:15:51,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:15:51,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
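Note on the block above: hbase.ResourceChecker compares the Thread, OpenFileDescriptor, SystemLoadAverage, ProcessCount and AvailableMemoryMB counters captured before and after each test method, and when an after-value exceeds its before-value it prints the suspect counter ("Thread LEAK?") together with a stack dump of every live thread ("Potentially hanging thread: ..."). A minimal sketch of that before/after idea using only standard JDK management beans (illustrative only; this is not the actual org.apache.hadoop.hbase.ResourceChecker, and the class and method names below are made up for the example):

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Illustrative leak check in the spirit of the ResourceChecker lines above.
public class SimpleResourceCheck {
  private int threadsBefore;
  private long fdsBefore;

  public void before() {
    threadsBefore = ManagementFactory.getThreadMXBean().getThreadCount();
    fdsBefore = openFds();
  }

  public void after(String testName) {
    int threadsAfter = ManagementFactory.getThreadMXBean().getThreadCount();
    long fdsAfter = openFds();
    if (threadsAfter > threadsBefore) {
      System.out.printf("%s: Thread LEAK? %d (was %d)%n", testName, threadsAfter, threadsBefore);
      // Dump stacks of all live threads, like the "Potentially hanging thread" blocks above.
      Thread.getAllStackTraces().forEach((t, stack) -> {
        System.out.println("Potentially hanging thread: " + t.getName());
        for (StackTraceElement frame : stack) {
          System.out.println("    " + frame);
        }
      });
    }
    if (fdsAfter > fdsBefore) {
      System.out.printf("%s: OpenFileDescriptor LEAK? %d (was %d)%n", testName, fdsAfter, fdsBefore);
    }
  }

  private long openFds() {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
      return ((com.sun.management.UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount();
    }
    return -1; // file descriptor count is only available on Unix-like JVMs
  }
}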
2023-07-13 22:15:51,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:51,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:15:51,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:51,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:15:51,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:51,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:15:51,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:15:51,025 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:15:51,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:15:51,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:51,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:51,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:15:51,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:15:51,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:15:51,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:51,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 176 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287751038, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:15:51,039 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:15:51,041 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:51,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,042 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:15:51,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:15:51,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:51,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-13 22:15:51,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:51,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:59834 deadline: 1689287751044, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-13 22:15:51,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-13 22:15:51,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:51,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:59834 deadline: 1689287751046, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-13 22:15:51,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-13 22:15:51,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:51,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 186 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:59834 deadline: 1689287751047, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-13 22:15:51,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-13 22:15:51,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-13 22:15:51,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:51,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:51,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:51,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:15:51,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:15:51,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
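Note on the block above: testValidGroupNames drives addRSGroup with the names foo*, foo@ and -, each rejected by RSGroupInfoManagerImpl.checkGroupName with "RSGroup name should only contain alphanumeric characters", while foo_123 is accepted and gets its own /hbase/rsgroup/foo_123 znode, so underscores are evidently allowed despite the wording of the message. An equivalent check inferred from that behaviour (illustrative only; the regex is an assumption drawn from the log, not copied from checkGroupName):

import org.apache.hadoop.hbase.constraint.ConstraintException;

// Inferred from the log: "foo_123" is accepted while "foo*", "foo@" and "-" are rejected,
// so the effective character set appears to be [a-zA-Z0-9_]. Not the real
// RSGroupInfoManagerImpl.checkGroupName source.
final class GroupNameCheck {
  static void checkGroupName(String groupName) throws ConstraintException {
    if (groupName == null || !groupName.matches("[a-zA-Z0-9_]+")) {
      throw new ConstraintException("RSGroup name should only contain alphanumeric characters");
    }
  }
}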
2023-07-13 22:15:51,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:51,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:15:51,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:51,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-13 22:15:51,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:51,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:51,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 22:15:51,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:15:51,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:15:51,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
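Note on the znode traffic above: each "Updating znode: /hbase/rsgroup/..." / "Writing ZK GroupInfo count: N" pair shows RSGroupInfoManagerImpl mirroring the group definitions into child znodes of /hbase/rsgroup after every mutation, and the VerifyingRSGroupAdminClient visible in the stack traces reads that mirror back to cross-check each call. A small sketch of inspecting the mirror with a bare ZooKeeper client, reusing the 127.0.0.1:54493 quorum address that appears in the thread names above (the inspection code itself is illustrative and not part of the test):

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

// Peek at the rsgroup mirror kept in ZooKeeper (the /hbase/rsgroup/... znodes updated above).
// The connect string matches the mini-cluster quorum seen in this log; the znode payloads are
// protobuf-serialized group definitions, so only their sizes are printed here.
public class ListRSGroupZNodes {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("127.0.0.1:54493", 30_000, event -> { });
    try {
      List<String> groups = zk.getChildren("/hbase/rsgroup", false);
      for (String group : groups) {
        byte[] data = zk.getData("/hbase/rsgroup/" + group, false, null);
        System.out.println(group + ": " + (data == null ? 0 : data.length) + " bytes");
      }
    } finally {
      zk.close();
    }
  }
}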
2023-07-13 22:15:51,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:51,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:15:51,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:51,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:15:51,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:51,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:15:51,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:15:51,088 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:15:51,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:15:51,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:51,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:51,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:15:51,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:15:51,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:15:51,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:51,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 220 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287751103, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:15:51,104 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:15:51,106 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:51,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,107 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:15:51,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:15:51,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:51,131 INFO [Listener at localhost/39613] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=498 (was 495) Potentially hanging thread: hconnection-0x122baacf-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=779 (was 779), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=385 (was 385), ProcessCount=172 (was 172), AvailableMemoryMB=4511 (was 4515) 2023-07-13 22:15:51,151 INFO [Listener at localhost/39613] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=498, OpenFileDescriptor=779, MaxFileDescriptor=60000, SystemLoadAverage=385, ProcessCount=172, AvailableMemoryMB=4511 2023-07-13 22:15:51,153 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-13 22:15:51,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:15:51,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
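Note on the repeated ListRSGroupInfos / MoveTables / MoveServers / RemoveRSGroup sequence before and after each test: it is TestRSGroupsBase.tearDownAfterMethod restoring the cluster by moving all tables and servers back to the default group, removing the extra groups, and then waiting up to 60 s (hbase.Waiter, "Waiting for cleanup to finish") until only the default and master groups remain; the follow-up attempt to move the master's own jenkins-hbase4.apache.org:34777 address into the master group fails with the ConstraintException logged as "Got this on setup, FYI", because the active master is not an online region server. A rough sketch of the restore-to-default part against the client API named in the stack traces (method signatures are assumed from branch-2 and not copied from the test code):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Rough sketch of the per-test cleanup visible in the log: move everything back to the
// default group and drop the remaining groups. Signatures assumed from the branch-2
// rsgroup client referenced in the stack traces above.
final class RSGroupCleanupSketch {
  static void restoreDefaultGrouping(Connection conn) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    for (RSGroupInfo group : admin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue; // the default group itself is never removed
      }
      // The server ignores empty sets ("moveTables() passed an empty set. Ignoring.").
      admin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
      admin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
      admin.removeRSGroup(group.getName());
    }
  }
}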
2023-07-13 22:15:51,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:51,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:15:51,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:51,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:15:51,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:51,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:15:51,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:15:51,177 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:15:51,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:15:51,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:51,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:51,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:15:51,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:15:51,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:15:51,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:51,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 248 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287751201, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:15:51,202 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:15:51,204 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:51,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,205 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:15:51,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:15:51,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:51,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:15:51,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:51,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
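The ConstraintException traced above comes from RSGroupAdminServer.moveServers rejecting an address that is not a live region server: the TestRSGroupsBase teardown tries to move the active master (port 34777) into the "master" group, and the test only logs the failure as "Got this on setup, FYI". A hedged sketch of the client call that produces it, assuming RSGroupAdminClient.moveServers(Set<Address>, String) as shown in the stack trace and a Connection-based constructor:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterIntoGroup {
      static void tryMoveMaster(Connection conn) throws IOException {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        // The master's address, not a region server, so the server-side check fails.
        Address master = Address.fromParts("jenkins-hbase4.apache.org", 34777);
        try {
          groups.moveServers(Collections.singleton(master), "master");
        } catch (IOException e) {
          // Unwrapped on the client as ConstraintException:
          // "Server ...:34777 is either offline or it does not exist."
          System.out.println("expected: " + e.getMessage());
        }
      }
    }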
2023-07-13 22:15:51,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:51,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 22:15:51,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:51,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:51,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:15:51,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:51,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:51,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39325] to rsgroup bar 2023-07-13 22:15:51,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:51,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 22:15:51,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:51,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:51,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(238): Moving server region c215608c4a51d4b80df51dd910f81bab, which do not belong to RSGroup bar 2023-07-13 22:15:51,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=c215608c4a51d4b80df51dd910f81bab, REOPEN/MOVE 2023-07-13 22:15:51,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-13 22:15:51,234 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=c215608c4a51d4b80df51dd910f81bab, REOPEN/MOVE 2023-07-13 22:15:51,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 22:15:51,234 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 
updating hbase:meta row=c215608c4a51d4b80df51dd910f81bab, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:51,236 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 22:15:51,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-13 22:15:51,236 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286551234"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286551234"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286551234"}]},"ts":"1689286551234"} 2023-07-13 22:15:51,236 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39325,1689286540864, state=CLOSING 2023-07-13 22:15:51,239 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure c215608c4a51d4b80df51dd910f81bab, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:15:51,239 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 22:15:51,240 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 22:15:51,241 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure c215608c4a51d4b80df51dd910f81bab, server=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:15:51,240 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=76, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:15:51,394 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-13 22:15:51,395 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 22:15:51,395 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 22:15:51,396 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 22:15:51,396 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 22:15:51,396 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 22:15:51,396 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=41.98 KB heapSize=64.98 KB 2023-07-13 22:15:51,423 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore 
data size=38.91 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/info/924d158f82414a86a7f8cec7007386a2 2023-07-13 22:15:51,431 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 924d158f82414a86a7f8cec7007386a2 2023-07-13 22:15:51,447 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/rep_barrier/e1a27cbbc9cf498e8c2461dd1442f0f6 2023-07-13 22:15:51,454 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e1a27cbbc9cf498e8c2461dd1442f0f6 2023-07-13 22:15:51,470 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/table/219ad606ec8742e6b248315aa0823cf2 2023-07-13 22:15:51,475 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 219ad606ec8742e6b248315aa0823cf2 2023-07-13 22:15:51,476 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/info/924d158f82414a86a7f8cec7007386a2 as hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/info/924d158f82414a86a7f8cec7007386a2 2023-07-13 22:15:51,483 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 924d158f82414a86a7f8cec7007386a2 2023-07-13 22:15:51,484 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/info/924d158f82414a86a7f8cec7007386a2, entries=46, sequenceid=95, filesize=10.2 K 2023-07-13 22:15:51,486 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/rep_barrier/e1a27cbbc9cf498e8c2461dd1442f0f6 as hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/rep_barrier/e1a27cbbc9cf498e8c2461dd1442f0f6 2023-07-13 22:15:51,492 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e1a27cbbc9cf498e8c2461dd1442f0f6 2023-07-13 22:15:51,493 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/rep_barrier/e1a27cbbc9cf498e8c2461dd1442f0f6, entries=10, sequenceid=95, filesize=6.1 K 2023-07-13 22:15:51,494 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/table/219ad606ec8742e6b248315aa0823cf2 as hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/table/219ad606ec8742e6b248315aa0823cf2 2023-07-13 22:15:51,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 219ad606ec8742e6b248315aa0823cf2 2023-07-13 22:15:51,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/table/219ad606ec8742e6b248315aa0823cf2, entries=15, sequenceid=95, filesize=6.2 K 2023-07-13 22:15:51,502 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~41.98 KB/42987, heapSize ~64.94 KB/66496, currentSize=0 B/0 for 1588230740 in 106ms, sequenceid=95, compaction requested=false 2023-07-13 22:15:51,515 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-13 22:15:51,516 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 22:15:51,517 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 22:15:51,517 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 22:15:51,517 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,43571,1689286544760 record at close sequenceid=95 2023-07-13 22:15:51,519 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-13 22:15:51,519 WARN [PEWorker-4] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-13 22:15:51,521 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=76 2023-07-13 22:15:51,521 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=76, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39325,1689286540864 in 279 msec 2023-07-13 22:15:51,522 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:15:51,672 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43571,1689286544760, state=OPENING 2023-07-13 22:15:51,674 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 22:15:51,678 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=76, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:51,678 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 22:15:51,834 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 22:15:51,834 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:15:51,836 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43571%2C1689286544760.meta, suffix=.meta, logDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,43571,1689286544760, archiveDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs, maxLogs=32 2023-07-13 22:15:51,854 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK] 2023-07-13 22:15:51,858 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK] 2023-07-13 22:15:51,860 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK] 2023-07-13 22:15:51,862 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,43571,1689286544760/jenkins-hbase4.apache.org%2C43571%2C1689286544760.meta.1689286551837.meta 2023-07-13 22:15:51,862 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43751,DS-376770af-b2f3-4ff0-acd7-139c06bd622e,DISK], DatanodeInfoWithStorage[127.0.0.1:35707,DS-fe2e9fcd-ca8e-4ac9-9939-793d545e84c4,DISK], DatanodeInfoWithStorage[127.0.0.1:46097,DS-1e20c75e-c81b-4953-8390-e2cbd5b4b836,DISK]] 2023-07-13 22:15:51,862 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:51,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 22:15:51,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 22:15:51,863 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor 
org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-13 22:15:51,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 22:15:51,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:51,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 22:15:51,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 22:15:51,865 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 22:15:51,866 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/info 2023-07-13 22:15:51,866 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/info 2023-07-13 22:15:51,866 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 22:15:51,880 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 924d158f82414a86a7f8cec7007386a2 2023-07-13 22:15:51,880 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/info/924d158f82414a86a7f8cec7007386a2 2023-07-13 22:15:51,880 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:51,881 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 22:15:51,882 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/rep_barrier 2023-07-13 22:15:51,882 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/rep_barrier 2023-07-13 22:15:51,883 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 22:15:51,893 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e1a27cbbc9cf498e8c2461dd1442f0f6 2023-07-13 22:15:51,893 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/rep_barrier/e1a27cbbc9cf498e8c2461dd1442f0f6 2023-07-13 22:15:51,893 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:51,893 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 22:15:51,894 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/table 2023-07-13 22:15:51,894 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/table 2023-07-13 22:15:51,895 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 22:15:51,905 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 219ad606ec8742e6b248315aa0823cf2 2023-07-13 22:15:51,906 DEBUG 
[StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/table/219ad606ec8742e6b248315aa0823cf2 2023-07-13 22:15:51,906 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:51,907 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740 2023-07-13 22:15:51,908 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740 2023-07-13 22:15:51,911 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-13 22:15:51,913 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 22:15:51,914 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=99; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10844239840, jitterRate=0.00994853675365448}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 22:15:51,914 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 22:15:51,915 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=79, masterSystemTime=1689286551830 2023-07-13 22:15:51,917 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 22:15:51,918 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 22:15:51,918 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43571,1689286544760, state=OPEN 2023-07-13 22:15:51,920 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 22:15:51,920 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 22:15:51,922 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=76 2023-07-13 22:15:51,922 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=76, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43571,1689286544760 in 246 msec 2023-07-13 22:15:51,924 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, 
REOPEN/MOVE in 688 msec 2023-07-13 22:15:52,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:52,073 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c215608c4a51d4b80df51dd910f81bab, disabling compactions & flushes 2023-07-13 22:15:52,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:52,074 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:52,074 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. after waiting 0 ms 2023-07-13 22:15:52,074 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:52,074 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c215608c4a51d4b80df51dd910f81bab 1/1 column families, dataSize=6.37 KB heapSize=10.52 KB 2023-07-13 22:15:52,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.37 KB at sequenceid=26 (bloomFilter=true), to=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/.tmp/m/343abf7fc9c246368b0c57629f811fac 2023-07-13 22:15:52,111 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 343abf7fc9c246368b0c57629f811fac 2023-07-13 22:15:52,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/.tmp/m/343abf7fc9c246368b0c57629f811fac as hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/m/343abf7fc9c246368b0c57629f811fac 2023-07-13 22:15:52,122 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 343abf7fc9c246368b0c57629f811fac 2023-07-13 22:15:52,122 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/m/343abf7fc9c246368b0c57629f811fac, entries=9, sequenceid=26, filesize=5.5 K 2023-07-13 22:15:52,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.37 KB/6527, heapSize ~10.50 KB/10752, currentSize=0 B/0 for c215608c4a51d4b80df51dd910f81bab in 49ms, sequenceid=26, compaction requested=false 2023-07-13 22:15:52,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 
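Everything from "move servers [...] to rsgroup bar" down to here is the side effect of that move: regions hosted on the moved servers but not belonging to the target group (hbase:meta and hbase:rsgroup) are reopened elsewhere through TransitRegionStateProcedure REOPEN/MOVE (pids 75 and 76), each region flushing its memstore before close. The same kind of transition can be requested directly through the standard Admin API; a minimal sketch, reusing the encoded region name and destination server from the log purely as illustration:

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionExample {
      static void moveRegion(Connection conn) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          // Destination taken from the log: host, port, start code.
          ServerName dest = ServerName.valueOf("jenkins-hbase4.apache.org", 43571, 1689286544760L);
          // Encoded name of the hbase:rsgroup region being moved above.
          admin.move(Bytes.toBytes("c215608c4a51d4b80df51dd910f81bab"), dest);
        }
      }
    }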
2023-07-13 22:15:52,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 22:15:52,137 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:52,137 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c215608c4a51d4b80df51dd910f81bab: 2023-07-13 22:15:52,137 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c215608c4a51d4b80df51dd910f81bab move to jenkins-hbase4.apache.org,43571,1689286544760 record at close sequenceid=26 2023-07-13 22:15:52,139 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:52,140 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=c215608c4a51d4b80df51dd910f81bab, regionState=CLOSED 2023-07-13 22:15:52,140 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286552140"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286552140"}]},"ts":"1689286552140"} 2023-07-13 22:15:52,141 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39325] ipc.CallRunner(144): callId: 186 service: ClientService methodName: Mutate size: 214 connection: 172.31.14.131:40320 deadline: 1689286612141, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43571 startCode=1689286544760. As of locationSeqNum=95. 
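The RegionMovedException above is not an error from the client's point of view: the old host answers a stale request with the region's new location (hostname, port, startCode, locationSeqNum), and the HBase client refreshes its region cache and retries, which is why the test run keeps going. A minimal sketch of a read that relies on that transparent retry (the row key is chosen for illustration only):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ReadAfterRegionMove {
      static Result readGroupRow(Connection conn) throws Exception {
        try (Table table = conn.getTable(TableName.valueOf("hbase:rsgroup"))) {
          // If the cached location is stale, the first attempt gets RegionMovedException;
          // the client updates its cache and the retry hits the new server.
          return table.get(new Get(Bytes.toBytes("bar")));
        }
      }
    }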
2023-07-13 22:15:52,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-13 22:15:52,248 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-13 22:15:52,248 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; CloseRegionProcedure c215608c4a51d4b80df51dd910f81bab, server=jenkins-hbase4.apache.org,39325,1689286540864 in 1.0050 sec 2023-07-13 22:15:52,248 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=c215608c4a51d4b80df51dd910f81bab, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:15:52,399 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=c215608c4a51d4b80df51dd910f81bab, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:52,399 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286552399"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286552399"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286552399"}]},"ts":"1689286552399"} 2023-07-13 22:15:52,401 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=75, state=RUNNABLE; OpenRegionProcedure c215608c4a51d4b80df51dd910f81bab, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:52,559 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:52,559 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c215608c4a51d4b80df51dd910f81bab, NAME => 'hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:52,559 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 22:15:52,559 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. service=MultiRowMutationService 2023-07-13 22:15:52,560 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
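At this point the hbase:rsgroup region is reopening on jenkins-hbase4.apache.org,43571, with the MultiRowMutationEndpoint coprocessor reloaded (the rsgroup table relies on it for multi-row updates). Once the move settles, group membership is checked through GetRSGroupInfo, the RPC that appears repeatedly in this log. A sketch of the equivalent client-side check, assuming getRSGroupInfo(String) and getRSGroupOfServer(Address) exist on RSGroupAdminClient in this branch:

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class CheckGroupMembership {
      static void check(Connection conn) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        RSGroupInfo bar = groups.getRSGroupInfo("bar");        // GetRSGroupInfo RPC
        System.out.println("bar servers: " + bar.getServers());
        RSGroupInfo ofServer =
            groups.getRSGroupOfServer(Address.fromParts("jenkins-hbase4.apache.org", 39109));
        System.out.println("39109 belongs to: " + ofServer.getName());
      }
    }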
2023-07-13 22:15:52,560 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:52,560 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:52,560 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:52,560 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:52,563 INFO [StoreOpener-c215608c4a51d4b80df51dd910f81bab-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:52,564 DEBUG [StoreOpener-c215608c4a51d4b80df51dd910f81bab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/m 2023-07-13 22:15:52,564 DEBUG [StoreOpener-c215608c4a51d4b80df51dd910f81bab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/m 2023-07-13 22:15:52,564 INFO [StoreOpener-c215608c4a51d4b80df51dd910f81bab-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c215608c4a51d4b80df51dd910f81bab columnFamilyName m 2023-07-13 22:15:52,574 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 343abf7fc9c246368b0c57629f811fac 2023-07-13 22:15:52,575 DEBUG [StoreOpener-c215608c4a51d4b80df51dd910f81bab-1] regionserver.HStore(539): loaded hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/m/343abf7fc9c246368b0c57629f811fac 2023-07-13 22:15:52,575 INFO [StoreOpener-c215608c4a51d4b80df51dd910f81bab-1] regionserver.HStore(310): Store=c215608c4a51d4b80df51dd910f81bab/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:52,576 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:52,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:52,581 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:15:52,582 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c215608c4a51d4b80df51dd910f81bab; next sequenceid=30; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1a8b1c6c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:52,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c215608c4a51d4b80df51dd910f81bab: 2023-07-13 22:15:52,583 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab., pid=80, masterSystemTime=1689286552553 2023-07-13 22:15:52,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:52,584 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:15:52,585 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=c215608c4a51d4b80df51dd910f81bab, regionState=OPEN, openSeqNum=30, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:52,585 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286552585"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286552585"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286552585"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286552585"}]},"ts":"1689286552585"} 2023-07-13 22:15:52,588 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=75 2023-07-13 22:15:52,588 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=75, state=SUCCESS; OpenRegionProcedure c215608c4a51d4b80df51dd910f81bab, server=jenkins-hbase4.apache.org,43571,1689286544760 in 185 msec 2023-07-13 22:15:52,590 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=c215608c4a51d4b80df51dd910f81bab, REOPEN/MOVE in 1.3570 sec 2023-07-13 22:15:53,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242, jenkins-hbase4.apache.org,39109,1689286541053, jenkins-hbase4.apache.org,39325,1689286540864] are moved back to default 2023-07-13 22:15:53,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-13 
22:15:53,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:53,238 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39325] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:40338 deadline: 1689286613238, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43571 startCode=1689286544760. As of locationSeqNum=26. 2023-07-13 22:15:53,341 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39325] ipc.CallRunner(144): callId: 12 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:40338 deadline: 1689286613341, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43571 startCode=1689286544760. As of locationSeqNum=95. 2023-07-13 22:15:53,444 DEBUG [hconnection-0x122baacf-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:15:53,445 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42578, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:15:53,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:53,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:53,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-13 22:15:53,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:53,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:15:53,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-13 22:15:53,470 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:15:53,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-13 
22:15:53,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-13 22:15:53,472 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39325] ipc.CallRunner(144): callId: 191 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:40320 deadline: 1689286613472, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43571 startCode=1689286544760. As of locationSeqNum=26. 2023-07-13 22:15:53,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-13 22:15:53,578 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:53,578 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 22:15:53,579 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:53,579 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:53,582 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:15:53,584 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:53,585 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326 empty. 
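[Editor's note] The entries above show the master accepting a CreateTable request for 'Group_testFailRemoveGroup' with a single column family 'f' and default attributes, and storing CreateTableProcedure pid=81. For reference, a minimal client-side sketch of issuing the same request with the HBase 2.x Admin API follows; only the table and family names are taken from the log, the connection/configuration setup is assumed and this is not the test's actual code.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateGroupTestTable {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
          // Single family 'f' with default attributes, as in the create request above.
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // Blocks until the master's CreateTableProcedure (pid=81 in this run) completes.
      admin.createTable(desc);
    }
  }
}
```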
2023-07-13 22:15:53,585 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:53,585 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-13 22:15:53,603 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-13 22:15:53,605 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => ef3af33d56fc846a5985c762395e7326, NAME => 'Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:53,618 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:53,618 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing ef3af33d56fc846a5985c762395e7326, disabling compactions & flushes 2023-07-13 22:15:53,618 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:53,618 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:53,618 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. after waiting 0 ms 2023-07-13 22:15:53,619 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:53,619 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 
2023-07-13 22:15:53,619 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for ef3af33d56fc846a5985c762395e7326: 2023-07-13 22:15:53,622 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:15:53,623 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286553623"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286553623"}]},"ts":"1689286553623"} 2023-07-13 22:15:53,625 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 22:15:53,626 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:15:53,626 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286553626"}]},"ts":"1689286553626"} 2023-07-13 22:15:53,627 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-13 22:15:53,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, ASSIGN}] 2023-07-13 22:15:53,638 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, ASSIGN 2023-07-13 22:15:53,639 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:15:53,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-13 22:15:53,790 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:53,790 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286553790"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286553790"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286553790"}]},"ts":"1689286553790"} 2023-07-13 22:15:53,793 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 
22:15:53,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:53,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ef3af33d56fc846a5985c762395e7326, NAME => 'Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:53,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:53,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:53,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:53,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:53,952 INFO [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:53,954 DEBUG [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/f 2023-07-13 22:15:53,954 DEBUG [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/f 2023-07-13 22:15:53,955 INFO [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ef3af33d56fc846a5985c762395e7326 columnFamilyName f 2023-07-13 22:15:53,955 INFO [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] regionserver.HStore(310): Store=ef3af33d56fc846a5985c762395e7326/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:53,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:53,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:53,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:53,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:53,964 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ef3af33d56fc846a5985c762395e7326; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10789905440, jitterRate=0.004888251423835754}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:53,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ef3af33d56fc846a5985c762395e7326: 2023-07-13 22:15:53,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326., pid=83, masterSystemTime=1689286553945 2023-07-13 22:15:53,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:53,972 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 
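[Editor's note] After the single region ef3af33d56fc846a5985c762395e7326 opens on jenkins-hbase4.apache.org,43571, the next entries show the test blocking until every region of the new table is assigned, with a 60000 ms timeout. In HBaseTestingUtility that wait is normally the waitUntilAllRegionsAssigned call; the sketch below assumes a mini cluster already started elsewhere in a TEST_UTIL field, in the style of the HBase test suites.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignment {
  // Assumed: the mini cluster was started elsewhere, e.g. TEST_UTIL.startMiniCluster(...).
  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  static void waitForTable() throws Exception {
    // Polls hbase:meta and the assignment manager until all regions of the table
    // are assigned, or the default 60 s timeout expires ("Timeout = 60000ms" in the log).
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testFailRemoveGroup"));
  }
}
```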
2023-07-13 22:15:53,973 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:53,973 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286553972"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286553972"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286553972"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286553972"}]},"ts":"1689286553972"} 2023-07-13 22:15:53,980 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-13 22:15:53,980 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,43571,1689286544760 in 182 msec 2023-07-13 22:15:53,982 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-13 22:15:53,982 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, ASSIGN in 344 msec 2023-07-13 22:15:53,983 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:15:53,983 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286553983"}]},"ts":"1689286553983"} 2023-07-13 22:15:53,984 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-13 22:15:53,987 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:15:53,988 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 521 msec 2023-07-13 22:15:54,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-13 22:15:54,077 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-13 22:15:54,077 DEBUG [Listener at localhost/39613] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-13 22:15:54,077 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:54,080 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39325] ipc.CallRunner(144): callId: 277 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:40332 deadline: 1689286614080, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43571 startCode=1689286544760. As of locationSeqNum=95. 2023-07-13 22:15:54,184 DEBUG [hconnection-0x2414dac3-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:15:54,200 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42586, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:15:54,208 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-13 22:15:54,209 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:54,209 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-13 22:15:54,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-13 22:15:54,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:54,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 22:15:54,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:54,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:54,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-13 22:15:54,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region ef3af33d56fc846a5985c762395e7326 to RSGroup bar 2023-07-13 22:15:54,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:54,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:54,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:54,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:15:54,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-13 22:15:54,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:54,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, REOPEN/MOVE 2023-07-13 22:15:54,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-13 22:15:54,222 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, REOPEN/MOVE 2023-07-13 22:15:54,223 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:54,223 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286554223"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286554223"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286554223"}]},"ts":"1689286554223"} 2023-07-13 22:15:54,225 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:54,274 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 22:15:54,378 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:54,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ef3af33d56fc846a5985c762395e7326, disabling compactions & flushes 2023-07-13 22:15:54,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:54,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:54,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. after waiting 0 ms 2023-07-13 22:15:54,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 
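[Editor's note] The entries above show the client asking the RSGroup admin endpoint to move table [Group_testFailRemoveGroup] into group 'bar', which the master turns into a REOPEN/MOVE TransitRegionStateProcedure (pid=84) for region ef3af33d56fc846a5985c762395e7326. A sketch of the corresponding client call, assuming the branch-2.4 hbase-rsgroup module's RSGroupAdminClient is on the classpath; not the test's literal code.

```java
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToBar {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // The master closes each region of the table on its current server and
      // reopens it on a server of the target group (the REOPEN/MOVE seen in the log).
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
    }
  }
}
```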
2023-07-13 22:15:54,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:54,389 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:54,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ef3af33d56fc846a5985c762395e7326: 2023-07-13 22:15:54,389 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ef3af33d56fc846a5985c762395e7326 move to jenkins-hbase4.apache.org,38543,1689286541242 record at close sequenceid=2 2023-07-13 22:15:54,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:54,391 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=CLOSED 2023-07-13 22:15:54,391 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286554391"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286554391"}]},"ts":"1689286554391"} 2023-07-13 22:15:54,395 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-13 22:15:54,395 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,43571,1689286544760 in 168 msec 2023-07-13 22:15:54,395 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38543,1689286541242; forceNewPlan=false, retain=false 2023-07-13 22:15:54,546 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 22:15:54,546 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:54,546 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286554546"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286554546"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286554546"}]},"ts":"1689286554546"} 2023-07-13 22:15:54,550 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:54,708 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:54,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ef3af33d56fc846a5985c762395e7326, NAME => 'Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:54,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:54,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:54,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:54,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:54,714 INFO [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:54,716 DEBUG [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/f 2023-07-13 22:15:54,716 DEBUG [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/f 2023-07-13 22:15:54,716 INFO [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, 
major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ef3af33d56fc846a5985c762395e7326 columnFamilyName f 2023-07-13 22:15:54,717 INFO [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] regionserver.HStore(310): Store=ef3af33d56fc846a5985c762395e7326/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:54,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:54,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:54,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:54,725 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ef3af33d56fc846a5985c762395e7326; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10456037600, jitterRate=-0.02620561420917511}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:54,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ef3af33d56fc846a5985c762395e7326: 2023-07-13 22:15:54,726 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326., pid=86, masterSystemTime=1689286554703 2023-07-13 22:15:54,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:54,730 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 
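[Editor's note] With the region now open on jenkins-hbase4.apache.org,38543 (a member of 'bar'), the next entries show the test listing the groups and fetching the info for 'bar'. A sketch of that check, reusing the assumed RSGroupAdminClient from the previous example; the host/port values are the ones from this run.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class VerifyBarGroup {
  static void verify(RSGroupAdminClient rsGroupAdmin) throws Exception {
    RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");
    // After the move, 'bar' should own the table and the servers moved into it earlier.
    boolean hasTable = bar.getTables().contains(TableName.valueOf("Group_testFailRemoveGroup"));
    boolean hasServer = bar.getServers().contains(
        Address.fromParts("jenkins-hbase4.apache.org", 38543));
    System.out.println("bar owns table=" + hasTable + ", owns 38543=" + hasServer);
  }
}
```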
2023-07-13 22:15:54,734 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:54,735 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286554734"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286554734"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286554734"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286554734"}]},"ts":"1689286554734"} 2023-07-13 22:15:54,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-13 22:15:54,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,38543,1689286541242 in 188 msec 2023-07-13 22:15:54,741 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, REOPEN/MOVE in 520 msec 2023-07-13 22:15:55,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-13 22:15:55,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 2023-07-13 22:15:55,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:55,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:55,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:55,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-13 22:15:55,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:55,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-13 22:15:55,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:55,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:59834 deadline: 1689287755230, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-13 22:15:55,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39325] to rsgroup default 2023-07-13 22:15:55,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:55,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:59834 deadline: 1689287755231, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
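[Editor's note] The two exceptions above are the point of the testFailRemoveGroup scenario: removing 'bar' while it still owns a table is rejected with a ConstraintException, and moving all of its servers out is rejected because the group's table would be left with no servers to host it. A sketch of exercising both rejections from the client side, again assuming RSGroupAdminClient; the server addresses are the ones from this run, and the checks are deliberately loose (catching IOException and relying on the server-side messages quoted from the log).

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class ExpectRemoveGroupFailures {
  static void expectFailures(RSGroupAdminClient rsGroupAdmin) {
    // 1) Removing a group that still owns tables is rejected.
    try {
      rsGroupAdmin.removeRSGroup("bar");
      throw new AssertionError("removeRSGroup should have been rejected");
    } catch (IOException expected) {
      // Server rejects with ConstraintException: "RSGroup bar has 1 tables; you must
      // remove these tables from the rsgroup before the rsgroup can be removed."
    }
    // 2) Moving every server out of a group that still owns tables is rejected.
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39109));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 38543));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39325));
    try {
      rsGroupAdmin.moveServers(servers, "default");
      throw new AssertionError("moveServers should have been rejected");
    } catch (IOException expected) {
      // Server rejects with ConstraintException: "Cannot leave a RSGroup bar that
      // contains tables without servers to host them."
    }
  }
}
```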
2023-07-13 22:15:55,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-13 22:15:55,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:55,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 22:15:55,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:55,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:55,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-13 22:15:55,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region ef3af33d56fc846a5985c762395e7326 to RSGroup default 2023-07-13 22:15:55,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, REOPEN/MOVE 2023-07-13 22:15:55,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 22:15:55,241 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, REOPEN/MOVE 2023-07-13 22:15:55,242 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:55,242 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286555242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286555242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286555242"}]},"ts":"1689286555242"} 2023-07-13 22:15:55,246 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:55,250 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-13 22:15:55,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:55,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ef3af33d56fc846a5985c762395e7326, disabling compactions & flushes 2023-07-13 22:15:55,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:55,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:55,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. after waiting 0 ms 2023-07-13 22:15:55,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:55,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 22:15:55,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:55,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ef3af33d56fc846a5985c762395e7326: 2023-07-13 22:15:55,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ef3af33d56fc846a5985c762395e7326 move to jenkins-hbase4.apache.org,43571,1689286544760 record at close sequenceid=5 2023-07-13 22:15:55,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:55,410 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=CLOSED 2023-07-13 22:15:55,410 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286555410"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286555410"}]},"ts":"1689286555410"} 2023-07-13 22:15:55,413 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-13 22:15:55,413 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,38543,1689286541242 in 169 msec 2023-07-13 22:15:55,414 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:15:55,565 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:55,565 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286555564"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286555564"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286555564"}]},"ts":"1689286555564"} 2023-07-13 22:15:55,568 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:55,725 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:55,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ef3af33d56fc846a5985c762395e7326, NAME => 'Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:55,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:55,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:55,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:55,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:55,728 INFO [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:55,730 DEBUG [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/f 2023-07-13 22:15:55,731 DEBUG [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/f 2023-07-13 22:15:55,731 INFO [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ef3af33d56fc846a5985c762395e7326 columnFamilyName f 2023-07-13 22:15:55,732 INFO [StoreOpener-ef3af33d56fc846a5985c762395e7326-1] regionserver.HStore(310): Store=ef3af33d56fc846a5985c762395e7326/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:55,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:55,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:55,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:55,741 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ef3af33d56fc846a5985c762395e7326; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11997134560, jitterRate=0.1173202246427536}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:55,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ef3af33d56fc846a5985c762395e7326: 2023-07-13 22:15:55,742 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326., pid=89, masterSystemTime=1689286555720 2023-07-13 22:15:55,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:55,744 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 
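[Editor's note] At this point the table has been moved back and its region reopened on jenkins-hbase4.apache.org,43571 in the default group; the following entries show the rest of the unwind succeeding (servers moved back to 'default', then 'bar' removed). The ordering is what makes it succeed, so here is a compact sketch of the whole teardown sequence under the same RSGroupAdminClient assumption, with the addresses from this run.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class TearDownBarGroup {
  static void tearDown(RSGroupAdminClient rsGroupAdmin) throws Exception {
    // 1) Move the table back to 'default' so 'bar' no longer owns any tables.
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
    // 2) Move the servers back to 'default'; allowed now that 'bar' owns no tables.
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39109));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 38543));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39325));
    rsGroupAdmin.moveServers(servers, "default");
    // 3) Only an empty group can be removed (RemoveRSGroup succeeds later in the log).
    rsGroupAdmin.removeRSGroup("bar");
  }
}
```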
2023-07-13 22:15:55,744 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:55,745 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286555744"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286555744"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286555744"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286555744"}]},"ts":"1689286555744"} 2023-07-13 22:15:55,748 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-13 22:15:55,748 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,43571,1689286544760 in 178 msec 2023-07-13 22:15:55,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, REOPEN/MOVE in 510 msec 2023-07-13 22:15:56,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-13 22:15:56,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 2023-07-13 22:15:56,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:56,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-13 22:15:56,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:56,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 296 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:59834 deadline: 1689287756253, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 2023-07-13 22:15:56,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39325] to rsgroup default 2023-07-13 22:15:56,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:56,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 22:15:56,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:56,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:56,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-13 22:15:56,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242, jenkins-hbase4.apache.org,39109,1689286541053, jenkins-hbase4.apache.org,39325,1689286540864] are moved back to bar 2023-07-13 22:15:56,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-13 22:15:56,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:56,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,277 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-13 22:15:56,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:56,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:56,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 22:15:56,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:15:56,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,294 INFO [Listener at localhost/39613] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-13 22:15:56,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-13 22:15:56,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-13 22:15:56,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-13 22:15:56,299 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286556299"}]},"ts":"1689286556299"} 2023-07-13 22:15:56,301 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-13 22:15:56,303 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-13 22:15:56,304 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, UNASSIGN}] 2023-07-13 22:15:56,306 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, UNASSIGN 2023-07-13 
22:15:56,306 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:15:56,307 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286556306"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286556306"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286556306"}]},"ts":"1689286556306"} 2023-07-13 22:15:56,311 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:15:56,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-13 22:15:56,463 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:56,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ef3af33d56fc846a5985c762395e7326, disabling compactions & flushes 2023-07-13 22:15:56,465 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:56,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:56,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. after waiting 0 ms 2023-07-13 22:15:56,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 2023-07-13 22:15:56,470 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-13 22:15:56,471 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326. 
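[Editor's note] The group-removal sequence recorded above -- removeRSGroup("bar") rejected with a ConstraintException while the group still holds three servers, the servers moved back to the default group, and a second removeRSGroup that then succeeds -- corresponds to a short sequence of client-side rsgroup admin calls. The following is only an illustrative sketch, not part of the captured log: the class name and the Connection parameter are placeholders, and it assumes a client connection to this mini cluster and the RSGroupAdminClient helper that appears later in this log's stack traces.

    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Hypothetical sketch of the client-side calls behind the log events above.
    public class RemoveBarGroupSketch {
      static void removeBar(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        try {
          rsGroupAdmin.removeRSGroup("bar");            // rejected: "bar" still holds 3 servers
        } catch (ConstraintException expected) {
          // "RSGroup bar has 3 servers; you must remove these servers ..."
        }
        Set<Address> servers = rsGroupAdmin.getRSGroupInfo("bar").getServers();
        rsGroupAdmin.moveServers(servers, "default");   // drain the group back into default
        rsGroupAdmin.removeRSGroup("bar");              // succeeds once the group is empty
      }
    }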
2023-07-13 22:15:56,471 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ef3af33d56fc846a5985c762395e7326: 2023-07-13 22:15:56,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:56,473 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=ef3af33d56fc846a5985c762395e7326, regionState=CLOSED 2023-07-13 22:15:56,473 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689286556473"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286556473"}]},"ts":"1689286556473"} 2023-07-13 22:15:56,479 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-13 22:15:56,480 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure ef3af33d56fc846a5985c762395e7326, server=jenkins-hbase4.apache.org,43571,1689286544760 in 164 msec 2023-07-13 22:15:56,481 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-13 22:15:56,481 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ef3af33d56fc846a5985c762395e7326, UNASSIGN in 176 msec 2023-07-13 22:15:56,482 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286556482"}]},"ts":"1689286556482"} 2023-07-13 22:15:56,483 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-13 22:15:56,485 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-13 22:15:56,487 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 191 msec 2023-07-13 22:15:56,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-13 22:15:56,602 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-13 22:15:56,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-13 22:15:56,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 22:15:56,606 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 22:15:56,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-13 22:15:56,607 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 22:15:56,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:56,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:56,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:15:56,612 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:56,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-13 22:15:56,614 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/recovered.edits] 2023-07-13 22:15:56,620 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/recovered.edits/10.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326/recovered.edits/10.seqid 2023-07-13 22:15:56,621 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testFailRemoveGroup/ef3af33d56fc846a5985c762395e7326 2023-07-13 22:15:56,621 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-13 22:15:56,624 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 22:15:56,626 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-13 22:15:56,628 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-13 22:15:56,630 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 22:15:56,630 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-13 22:15:56,630 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286556630"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:56,632 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 22:15:56,632 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ef3af33d56fc846a5985c762395e7326, NAME => 'Group_testFailRemoveGroup,,1689286553466.ef3af33d56fc846a5985c762395e7326.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 22:15:56,632 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-13 22:15:56,632 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689286556632"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:56,633 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-13 22:15:56,635 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 22:15:56,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 32 msec 2023-07-13 22:15:56,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-13 22:15:56,715 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-13 22:15:56,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:15:56,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
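[Editor's note] The DisableTableProcedure (pid=90) and DeleteTableProcedure (pid=93) traced above are what ordinary Admin calls drive on the client side. A minimal, hypothetical equivalent (not part of the log; the class name and `conn` are placeholders, the table name is taken from the log) would be:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    // Hypothetical sketch: client calls behind the disable/delete procedures above.
    public class DropTableSketch {
      static void dropTable(Connection conn) throws Exception {
        TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
        try (Admin admin = conn.getAdmin()) {
          admin.disableTable(tn);  // closes the region and marks the table DISABLED in hbase:meta
          admin.deleteTable(tn);   // archives the region files and removes the table from hbase:meta
        }
      }
    }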
2023-07-13 22:15:56,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:56,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:15:56,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:56,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:15:56,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:56,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:15:56,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:15:56,743 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:15:56,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:15:56,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:56,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:56,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:15:56,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:15:56,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:15:56,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:56,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 344 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287756771, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:15:56,773 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:15:56,775 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:56,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,777 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:15:56,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:15:56,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:56,800 INFO [Listener at localhost/39613] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=512 (was 498) Potentially hanging thread: hconnection-0x2414dac3-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/dfs/data/data5/current 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1235144863_17 at /127.0.0.1:55260 [Receiving block BP-1576339184-172.31.14.131-1689286535402:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1576339184-172.31.14.131-1689286535402:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1576339184-172.31.14.131-1689286535402:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1576339184-172.31.14.131-1689286535402:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_571360115_17 at /127.0.0.1:60426 [Waiting for operation #14] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1235144863_17 at /127.0.0.1:41140 [Receiving block BP-1576339184-172.31.14.131-1689286535402:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c-prefix:jenkins-hbase4.apache.org,43571,1689286544760.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-10 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1235144863_17 at /127.0.0.1:38250 [Receiving block BP-1576339184-172.31.14.131-1689286535402:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1235144863_17 at /127.0.0.1:55310 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=788 (was 779) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=387 (was 385) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=4137 (was 4511) 2023-07-13 22:15:56,801 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-13 22:15:56,817 INFO [Listener at localhost/39613] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=512, OpenFileDescriptor=788, MaxFileDescriptor=60000, SystemLoadAverage=387, ProcessCount=172, AvailableMemoryMB=4135 2023-07-13 22:15:56,817 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-13 22:15:56,817 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-13 22:15:56,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:15:56,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
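[Editorial aside, not part of the captured log] The records above show the test's per-method setup issuing RSGroupAdminService calls against the master: ListRSGroupInfos, then MoveTables and MoveServers with empty argument sets, which the server logs as "passed an empty set. Ignoring." A minimal client-side sketch of those equivalent calls is given below; it assumes an HBase 2.x connection and the hbase-rsgroup module on the classpath (class and method names taken from the stack traces in this log), and is illustrative only.

```java
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupSetupSketch {
  public static void main(String[] args) throws Exception {
    // Connect to the (mini)cluster described by the local hbase-site.xml.
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // List the currently defined rsgroups, as the setup above does.
      rsGroupAdmin.listRSGroups().forEach(info -> System.out.println(info.getName()));

      // Moving an empty set of tables/servers back to 'default' is a no-op,
      // which is why the server logs "moveTables() passed an empty set. Ignoring."
      rsGroupAdmin.moveTables(Collections.<TableName>emptySet(), "default");
      rsGroupAdmin.moveServers(Collections.<Address>emptySet(), "default");
    }
  }
}
```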
2023-07-13 22:15:56,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:56,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:15:56,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:56,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:15:56,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:56,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:15:56,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:15:56,834 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:15:56,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:15:56,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:56,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:56,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:15:56,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:15:56,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:15:56,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:15:56,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 372 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287756846, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:15:56,846 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:15:56,850 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:56,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,851 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:15:56,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:15:56,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:56,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:15:56,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:56,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_2005655762 2023-07-13 22:15:56,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:56,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:56,856 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2005655762 2023-07-13 22:15:56,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:56,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:15:56,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38543] to rsgroup Group_testMultiTableMove_2005655762 2023-07-13 22:15:56,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:56,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:56,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2005655762 2023-07-13 22:15:56,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:56,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 22:15:56,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242] are moved back to default 2023-07-13 22:15:56,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_2005655762 2023-07-13 22:15:56,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:15:56,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:56,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:56,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_2005655762 2023-07-13 22:15:56,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): 
User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:56,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:15:56,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 22:15:56,878 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:15:56,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-13 22:15:56,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 22:15:56,880 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:56,880 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:56,880 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2005655762 2023-07-13 22:15:56,881 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:56,886 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:15:56,887 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:56,888 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0 empty. 
2023-07-13 22:15:56,888 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:56,888 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-13 22:15:56,902 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-13 22:15:56,903 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 365b51db146791c6fcf6f71d33a066f0, NAME => 'GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:56,913 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:56,913 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 365b51db146791c6fcf6f71d33a066f0, disabling compactions & flushes 2023-07-13 22:15:56,913 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 2023-07-13 22:15:56,913 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 2023-07-13 22:15:56,913 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. after waiting 0 ms 2023-07-13 22:15:56,914 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 2023-07-13 22:15:56,914 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 
2023-07-13 22:15:56,914 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 365b51db146791c6fcf6f71d33a066f0: 2023-07-13 22:15:56,916 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:15:56,917 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286556916"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286556916"}]},"ts":"1689286556916"} 2023-07-13 22:15:56,918 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 22:15:56,921 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:15:56,921 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286556921"}]},"ts":"1689286556921"} 2023-07-13 22:15:56,922 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-13 22:15:56,927 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:56,927 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:56,927 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:56,927 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:15:56,927 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:56,927 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=365b51db146791c6fcf6f71d33a066f0, ASSIGN}] 2023-07-13 22:15:56,930 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=365b51db146791c6fcf6f71d33a066f0, ASSIGN 2023-07-13 22:15:56,931 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=365b51db146791c6fcf6f71d33a066f0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39109,1689286541053; forceNewPlan=false, retain=false 2023-07-13 22:15:56,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 22:15:57,081 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 22:15:57,082 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=365b51db146791c6fcf6f71d33a066f0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:57,083 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286557082"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286557082"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286557082"}]},"ts":"1689286557082"} 2023-07-13 22:15:57,085 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 365b51db146791c6fcf6f71d33a066f0, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:57,095 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 22:15:57,096 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 22:15:57,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 22:15:57,240 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 2023-07-13 22:15:57,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 365b51db146791c6fcf6f71d33a066f0, NAME => 'GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:57,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:57,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:57,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:57,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:57,242 INFO [StoreOpener-365b51db146791c6fcf6f71d33a066f0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:57,244 DEBUG [StoreOpener-365b51db146791c6fcf6f71d33a066f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0/f 2023-07-13 22:15:57,244 DEBUG [StoreOpener-365b51db146791c6fcf6f71d33a066f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0/f 2023-07-13 22:15:57,244 INFO [StoreOpener-365b51db146791c6fcf6f71d33a066f0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 365b51db146791c6fcf6f71d33a066f0 columnFamilyName f 2023-07-13 22:15:57,245 INFO [StoreOpener-365b51db146791c6fcf6f71d33a066f0-1] regionserver.HStore(310): Store=365b51db146791c6fcf6f71d33a066f0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:57,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:57,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:57,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:57,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:57,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 365b51db146791c6fcf6f71d33a066f0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9593600000, jitterRate=-0.10652637481689453}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:57,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 365b51db146791c6fcf6f71d33a066f0: 2023-07-13 22:15:57,252 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0., pid=96, masterSystemTime=1689286557237 2023-07-13 22:15:57,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 2023-07-13 22:15:57,254 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 
2023-07-13 22:15:57,254 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=365b51db146791c6fcf6f71d33a066f0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:57,255 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286557254"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286557254"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286557254"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286557254"}]},"ts":"1689286557254"} 2023-07-13 22:15:57,258 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-13 22:15:57,258 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 365b51db146791c6fcf6f71d33a066f0, server=jenkins-hbase4.apache.org,39109,1689286541053 in 171 msec 2023-07-13 22:15:57,259 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-13 22:15:57,259 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=365b51db146791c6fcf6f71d33a066f0, ASSIGN in 331 msec 2023-07-13 22:15:57,260 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:15:57,260 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286557260"}]},"ts":"1689286557260"} 2023-07-13 22:15:57,261 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-13 22:15:57,264 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:15:57,265 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 389 msec 2023-07-13 22:15:57,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 22:15:57,482 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-13 22:15:57,483 DEBUG [Listener at localhost/39613] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-13 22:15:57,483 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:57,487 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
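[Editorial aside, not part of the captured log] The CreateTableProcedure above (pid=94) is the server-side execution of a client create-table request for 'GrouptestMultiTableMoveA' with a single column family 'f'. A minimal sketch of the corresponding client call follows; it assumes the HBase 2.x client API (TableDescriptorBuilder / ColumnFamilyDescriptorBuilder) and is not taken from the test source itself.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // A table with one column family 'f', matching the descriptor
      // printed by HMaster in the records above.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("GrouptestMultiTableMoveA"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      admin.createTable(desc);  // returns once the create procedure completes

      // The test utility then waits until all regions are assigned; a plain
      // client can simply check table availability instead.
      System.out.println("available = " + admin.isTableAvailable(desc.getTableName()));
    }
  }
}
```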
2023-07-13 22:15:57,487 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:57,487 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-13 22:15:57,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:15:57,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 22:15:57,496 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:15:57,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-13 22:15:57,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-13 22:15:57,499 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:57,500 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:57,500 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2005655762 2023-07-13 22:15:57,501 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:57,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-13 22:15:57,695 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:15:57,698 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:57,698 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996 empty. 
2023-07-13 22:15:57,699 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:57,699 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-13 22:15:57,730 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-13 22:15:57,732 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => edc46be13af59d0549fb7b00bab5c996, NAME => 'GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:15:57,761 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:57,761 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing edc46be13af59d0549fb7b00bab5c996, disabling compactions & flushes 2023-07-13 22:15:57,761 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:57,761 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:57,761 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. after waiting 0 ms 2023-07-13 22:15:57,762 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:57,762 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 
2023-07-13 22:15:57,762 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for edc46be13af59d0549fb7b00bab5c996: 2023-07-13 22:15:57,766 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:15:57,767 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286557767"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286557767"}]},"ts":"1689286557767"} 2023-07-13 22:15:57,769 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 22:15:57,770 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:15:57,770 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286557770"}]},"ts":"1689286557770"} 2023-07-13 22:15:57,771 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-13 22:15:57,776 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:15:57,776 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:15:57,776 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:15:57,776 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:15:57,776 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:15:57,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=edc46be13af59d0549fb7b00bab5c996, ASSIGN}] 2023-07-13 22:15:57,778 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=edc46be13af59d0549fb7b00bab5c996, ASSIGN 2023-07-13 22:15:57,779 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=edc46be13af59d0549fb7b00bab5c996, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39109,1689286541053; forceNewPlan=false, retain=false 2023-07-13 22:15:57,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-13 22:15:57,930 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 22:15:57,931 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=edc46be13af59d0549fb7b00bab5c996, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:57,931 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286557931"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286557931"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286557931"}]},"ts":"1689286557931"} 2023-07-13 22:15:57,933 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure edc46be13af59d0549fb7b00bab5c996, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:58,089 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:58,089 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => edc46be13af59d0549fb7b00bab5c996, NAME => 'GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:58,089 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:58,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,091 INFO [StoreOpener-edc46be13af59d0549fb7b00bab5c996-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,093 DEBUG [StoreOpener-edc46be13af59d0549fb7b00bab5c996-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996/f 2023-07-13 22:15:58,093 DEBUG [StoreOpener-edc46be13af59d0549fb7b00bab5c996-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996/f 2023-07-13 22:15:58,093 INFO [StoreOpener-edc46be13af59d0549fb7b00bab5c996-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region edc46be13af59d0549fb7b00bab5c996 columnFamilyName f 2023-07-13 22:15:58,094 INFO [StoreOpener-edc46be13af59d0549fb7b00bab5c996-1] regionserver.HStore(310): Store=edc46be13af59d0549fb7b00bab5c996/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:58,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,098 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,100 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:15:58,101 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened edc46be13af59d0549fb7b00bab5c996; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10644425600, jitterRate=-0.008660614490509033}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:58,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for edc46be13af59d0549fb7b00bab5c996: 2023-07-13 22:15:58,102 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996., pid=99, masterSystemTime=1689286558085 2023-07-13 22:15:58,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:58,104 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 
2023-07-13 22:15:58,105 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=edc46be13af59d0549fb7b00bab5c996, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:58,105 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286558105"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286558105"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286558105"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286558105"}]},"ts":"1689286558105"} 2023-07-13 22:15:58,111 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-13 22:15:58,111 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure edc46be13af59d0549fb7b00bab5c996, server=jenkins-hbase4.apache.org,39109,1689286541053 in 176 msec 2023-07-13 22:15:58,114 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-13 22:15:58,114 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=edc46be13af59d0549fb7b00bab5c996, ASSIGN in 335 msec 2023-07-13 22:15:58,115 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:15:58,115 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286558115"}]},"ts":"1689286558115"} 2023-07-13 22:15:58,117 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-13 22:15:58,120 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:15:58,122 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 630 msec 2023-07-13 22:15:58,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-13 22:15:58,192 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-13 22:15:58,192 DEBUG [Listener at localhost/39613] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-13 22:15:58,192 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:58,198 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-13 22:15:58,198 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:58,199 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-13 22:15:58,199 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:15:58,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-13 22:15:58,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:15:58,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-13 22:15:58,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:15:58,215 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_2005655762 2023-07-13 22:15:58,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_2005655762 2023-07-13 22:15:58,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:58,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:58,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2005655762 2023-07-13 22:15:58,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:58,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_2005655762 2023-07-13 22:15:58,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region edc46be13af59d0549fb7b00bab5c996 to RSGroup Group_testMultiTableMove_2005655762 2023-07-13 22:15:58,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=edc46be13af59d0549fb7b00bab5c996, REOPEN/MOVE 2023-07-13 22:15:58,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_2005655762 2023-07-13 22:15:58,226 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region 365b51db146791c6fcf6f71d33a066f0 to RSGroup Group_testMultiTableMove_2005655762 2023-07-13 22:15:58,227 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=edc46be13af59d0549fb7b00bab5c996, REOPEN/MOVE 2023-07-13 22:15:58,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=365b51db146791c6fcf6f71d33a066f0, REOPEN/MOVE 2023-07-13 22:15:58,227 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=edc46be13af59d0549fb7b00bab5c996, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:58,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_2005655762, current retry=0 2023-07-13 22:15:58,229 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=365b51db146791c6fcf6f71d33a066f0, REOPEN/MOVE 2023-07-13 22:15:58,229 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286558227"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286558227"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286558227"}]},"ts":"1689286558227"} 2023-07-13 22:15:58,230 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=365b51db146791c6fcf6f71d33a066f0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:15:58,230 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286558230"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286558230"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286558230"}]},"ts":"1689286558230"} 2023-07-13 22:15:58,230 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure edc46be13af59d0549fb7b00bab5c996, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:58,233 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure 365b51db146791c6fcf6f71d33a066f0, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:15:58,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing edc46be13af59d0549fb7b00bab5c996, disabling compactions & flushes 2023-07-13 22:15:58,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:58,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:58,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. after waiting 0 ms 2023-07-13 22:15:58,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:58,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:58,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:58,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for edc46be13af59d0549fb7b00bab5c996: 2023-07-13 22:15:58,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding edc46be13af59d0549fb7b00bab5c996 move to jenkins-hbase4.apache.org,38543,1689286541242 record at close sequenceid=2 2023-07-13 22:15:58,392 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,392 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:58,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 365b51db146791c6fcf6f71d33a066f0, disabling compactions & flushes 2023-07-13 22:15:58,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 2023-07-13 22:15:58,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 2023-07-13 22:15:58,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. after waiting 0 ms 2023-07-13 22:15:58,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 
2023-07-13 22:15:58,393 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=edc46be13af59d0549fb7b00bab5c996, regionState=CLOSED 2023-07-13 22:15:58,393 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286558393"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286558393"}]},"ts":"1689286558393"} 2023-07-13 22:15:58,396 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-13 22:15:58,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:15:58,396 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure edc46be13af59d0549fb7b00bab5c996, server=jenkins-hbase4.apache.org,39109,1689286541053 in 165 msec 2023-07-13 22:15:58,397 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=edc46be13af59d0549fb7b00bab5c996, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38543,1689286541242; forceNewPlan=false, retain=false 2023-07-13 22:15:58,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 
2023-07-13 22:15:58,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 365b51db146791c6fcf6f71d33a066f0: 2023-07-13 22:15:58,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 365b51db146791c6fcf6f71d33a066f0 move to jenkins-hbase4.apache.org,38543,1689286541242 record at close sequenceid=2 2023-07-13 22:15:58,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:58,399 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=365b51db146791c6fcf6f71d33a066f0, regionState=CLOSED 2023-07-13 22:15:58,399 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286558399"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286558399"}]},"ts":"1689286558399"} 2023-07-13 22:15:58,402 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-13 22:15:58,402 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure 365b51db146791c6fcf6f71d33a066f0, server=jenkins-hbase4.apache.org,39109,1689286541053 in 167 msec 2023-07-13 22:15:58,402 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=365b51db146791c6fcf6f71d33a066f0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38543,1689286541242; forceNewPlan=false, retain=false 2023-07-13 22:15:58,548 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=edc46be13af59d0549fb7b00bab5c996, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:58,548 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=365b51db146791c6fcf6f71d33a066f0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:58,548 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286558547"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286558547"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286558547"}]},"ts":"1689286558547"} 2023-07-13 22:15:58,548 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286558547"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286558547"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286558547"}]},"ts":"1689286558547"} 2023-07-13 22:15:58,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure edc46be13af59d0549fb7b00bab5c996, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:58,550 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, 
state=RUNNABLE; OpenRegionProcedure 365b51db146791c6fcf6f71d33a066f0, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:58,706 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:58,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => edc46be13af59d0549fb7b00bab5c996, NAME => 'GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:58,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:58,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,708 INFO [StoreOpener-edc46be13af59d0549fb7b00bab5c996-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,709 DEBUG [StoreOpener-edc46be13af59d0549fb7b00bab5c996-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996/f 2023-07-13 22:15:58,709 DEBUG [StoreOpener-edc46be13af59d0549fb7b00bab5c996-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996/f 2023-07-13 22:15:58,710 INFO [StoreOpener-edc46be13af59d0549fb7b00bab5c996-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region edc46be13af59d0549fb7b00bab5c996 columnFamilyName f 2023-07-13 22:15:58,711 INFO [StoreOpener-edc46be13af59d0549fb7b00bab5c996-1] regionserver.HStore(310): Store=edc46be13af59d0549fb7b00bab5c996/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:58,719 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:58,725 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened edc46be13af59d0549fb7b00bab5c996; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9790065760, jitterRate=-0.08822907507419586}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:58,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for edc46be13af59d0549fb7b00bab5c996: 2023-07-13 22:15:58,726 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996., pid=104, masterSystemTime=1689286558702 2023-07-13 22:15:58,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:58,728 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:58,728 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 
2023-07-13 22:15:58,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 365b51db146791c6fcf6f71d33a066f0, NAME => 'GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:15:58,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:58,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:15:58,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:58,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:58,729 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=edc46be13af59d0549fb7b00bab5c996, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:58,730 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286558729"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286558729"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286558729"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286558729"}]},"ts":"1689286558729"} 2023-07-13 22:15:58,732 INFO [StoreOpener-365b51db146791c6fcf6f71d33a066f0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:58,733 DEBUG [StoreOpener-365b51db146791c6fcf6f71d33a066f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0/f 2023-07-13 22:15:58,733 DEBUG [StoreOpener-365b51db146791c6fcf6f71d33a066f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0/f 2023-07-13 22:15:58,734 INFO [StoreOpener-365b51db146791c6fcf6f71d33a066f0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 365b51db146791c6fcf6f71d33a066f0 columnFamilyName f 2023-07-13 22:15:58,735 INFO [StoreOpener-365b51db146791c6fcf6f71d33a066f0-1] regionserver.HStore(310): Store=365b51db146791c6fcf6f71d33a066f0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:15:58,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-13 22:15:58,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure edc46be13af59d0549fb7b00bab5c996, server=jenkins-hbase4.apache.org,38543,1689286541242 in 183 msec 2023-07-13 22:15:58,736 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:58,738 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=edc46be13af59d0549fb7b00bab5c996, REOPEN/MOVE in 511 msec 2023-07-13 22:15:58,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:58,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:58,744 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 365b51db146791c6fcf6f71d33a066f0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9793499200, jitterRate=-0.08790931105613708}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:15:58,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 365b51db146791c6fcf6f71d33a066f0: 2023-07-13 22:15:58,745 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0., pid=105, masterSystemTime=1689286558702 2023-07-13 22:15:58,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 2023-07-13 22:15:58,746 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 
2023-07-13 22:15:58,747 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=365b51db146791c6fcf6f71d33a066f0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:58,747 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286558746"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286558746"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286558746"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286558746"}]},"ts":"1689286558746"} 2023-07-13 22:15:58,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-13 22:15:58,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure 365b51db146791c6fcf6f71d33a066f0, server=jenkins-hbase4.apache.org,38543,1689286541242 in 198 msec 2023-07-13 22:15:58,751 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=365b51db146791c6fcf6f71d33a066f0, REOPEN/MOVE in 523 msec 2023-07-13 22:15:59,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-13 22:15:59,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_2005655762. 2023-07-13 22:15:59,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:15:59,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:15:59,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:15:59,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-13 22:15:59,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:15:59,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-13 22:15:59,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:15:59,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:15:59,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:59,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_2005655762 2023-07-13 22:15:59,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:15:59,242 INFO [Listener at localhost/39613] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-13 22:15:59,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-13 22:15:59,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 22:15:59,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-13 22:15:59,247 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286559246"}]},"ts":"1689286559246"} 2023-07-13 22:15:59,248 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-13 22:15:59,250 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-13 22:15:59,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=365b51db146791c6fcf6f71d33a066f0, UNASSIGN}] 2023-07-13 22:15:59,255 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=365b51db146791c6fcf6f71d33a066f0, UNASSIGN 2023-07-13 22:15:59,256 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=365b51db146791c6fcf6f71d33a066f0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:59,256 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286559256"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286559256"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286559256"}]},"ts":"1689286559256"} 2023-07-13 22:15:59,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure 365b51db146791c6fcf6f71d33a066f0, 
server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:59,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-13 22:15:59,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:59,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 365b51db146791c6fcf6f71d33a066f0, disabling compactions & flushes 2023-07-13 22:15:59,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 2023-07-13 22:15:59,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 2023-07-13 22:15:59,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. after waiting 0 ms 2023-07-13 22:15:59,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 2023-07-13 22:15:59,417 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 22:15:59,418 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0. 
2023-07-13 22:15:59,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 365b51db146791c6fcf6f71d33a066f0: 2023-07-13 22:15:59,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:59,420 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=365b51db146791c6fcf6f71d33a066f0, regionState=CLOSED 2023-07-13 22:15:59,421 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286559420"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286559420"}]},"ts":"1689286559420"} 2023-07-13 22:15:59,424 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-13 22:15:59,424 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure 365b51db146791c6fcf6f71d33a066f0, server=jenkins-hbase4.apache.org,38543,1689286541242 in 165 msec 2023-07-13 22:15:59,431 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-13 22:15:59,431 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=365b51db146791c6fcf6f71d33a066f0, UNASSIGN in 173 msec 2023-07-13 22:15:59,433 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286559433"}]},"ts":"1689286559433"} 2023-07-13 22:15:59,436 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-13 22:15:59,438 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-13 22:15:59,442 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 198 msec 2023-07-13 22:15:59,452 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 22:15:59,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-13 22:15:59,549 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-13 22:15:59,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-13 22:15:59,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 22:15:59,552 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 22:15:59,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] 
rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_2005655762' 2023-07-13 22:15:59,553 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 22:15:59,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:59,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:59,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2005655762 2023-07-13 22:15:59,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:59,558 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:59,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-13 22:15:59,559 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0/recovered.edits] 2023-07-13 22:15:59,564 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0/recovered.edits/7.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0/recovered.edits/7.seqid 2023-07-13 22:15:59,565 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveA/365b51db146791c6fcf6f71d33a066f0 2023-07-13 22:15:59,565 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-13 22:15:59,567 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 22:15:59,569 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-13 22:15:59,571 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 
2023-07-13 22:15:59,572 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 22:15:59,572 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-13 22:15:59,572 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286559572"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:59,573 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 22:15:59,573 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 365b51db146791c6fcf6f71d33a066f0, NAME => 'GrouptestMultiTableMoveA,,1689286556875.365b51db146791c6fcf6f71d33a066f0.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 22:15:59,574 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-13 22:15:59,574 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689286559574"}]},"ts":"9223372036854775807"} 2023-07-13 22:15:59,575 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-13 22:15:59,577 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 22:15:59,578 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 27 msec 2023-07-13 22:15:59,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-13 22:15:59,660 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-13 22:15:59,661 INFO [Listener at localhost/39613] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-13 22:15:59,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-13 22:15:59,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 22:15:59,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-13 22:15:59,666 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286559666"}]},"ts":"1689286559666"} 2023-07-13 22:15:59,668 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-13 22:15:59,669 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-13 22:15:59,670 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=edc46be13af59d0549fb7b00bab5c996, UNASSIGN}] 2023-07-13 22:15:59,672 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=edc46be13af59d0549fb7b00bab5c996, UNASSIGN 2023-07-13 22:15:59,674 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=edc46be13af59d0549fb7b00bab5c996, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:15:59,674 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286559673"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286559673"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286559673"}]},"ts":"1689286559673"} 2023-07-13 22:15:59,675 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure edc46be13af59d0549fb7b00bab5c996, server=jenkins-hbase4.apache.org,38543,1689286541242}] 2023-07-13 22:15:59,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-13 22:15:59,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:59,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing edc46be13af59d0549fb7b00bab5c996, disabling compactions & flushes 2023-07-13 22:15:59,829 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:59,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:59,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. after waiting 0 ms 2023-07-13 22:15:59,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 2023-07-13 22:15:59,834 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 22:15:59,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996. 
2023-07-13 22:15:59,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for edc46be13af59d0549fb7b00bab5c996: 2023-07-13 22:15:59,837 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:59,838 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=edc46be13af59d0549fb7b00bab5c996, regionState=CLOSED 2023-07-13 22:15:59,838 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689286559838"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286559838"}]},"ts":"1689286559838"} 2023-07-13 22:15:59,842 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-13 22:15:59,842 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure edc46be13af59d0549fb7b00bab5c996, server=jenkins-hbase4.apache.org,38543,1689286541242 in 165 msec 2023-07-13 22:15:59,844 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-13 22:15:59,844 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=edc46be13af59d0549fb7b00bab5c996, UNASSIGN in 172 msec 2023-07-13 22:15:59,845 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286559844"}]},"ts":"1689286559844"} 2023-07-13 22:15:59,846 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-13 22:15:59,854 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-13 22:15:59,857 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 194 msec 2023-07-13 22:15:59,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-13 22:15:59,968 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-13 22:15:59,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-13 22:15:59,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 22:15:59,973 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 22:15:59,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_2005655762' 2023-07-13 22:15:59,974 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 22:15:59,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:15:59,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:15:59,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2005655762 2023-07-13 22:15:59,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:15:59,980 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:59,982 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996/recovered.edits] 2023-07-13 22:15:59,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-13 22:15:59,989 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996/recovered.edits/7.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996/recovered.edits/7.seqid 2023-07-13 22:15:59,990 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/GrouptestMultiTableMoveB/edc46be13af59d0549fb7b00bab5c996 2023-07-13 22:15:59,990 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-13 22:15:59,993 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 22:15:59,996 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-13 22:16:00,000 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-13 22:16:00,001 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 22:16:00,001 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-13 22:16:00,002 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286560001"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:00,004 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 22:16:00,004 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => edc46be13af59d0549fb7b00bab5c996, NAME => 'GrouptestMultiTableMoveB,,1689286557490.edc46be13af59d0549fb7b00bab5c996.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 22:16:00,004 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-13 22:16:00,004 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689286560004"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:00,005 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-13 22:16:00,015 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 22:16:00,017 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 45 msec 2023-07-13 22:16:00,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-13 22:16:00,086 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-13 22:16:00,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:00,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
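[editor's note] The entries above record the cleanup disabling and then deleting GrouptestMultiTableMoveB (DisableTableProcedure pid=110, then DeleteTableProcedure pid=113, both reported completed by the client). Below is a minimal client-side sketch of that same disable-then-delete sequence; the table name comes from the log, while the connection setup and class name are illustrative only (the test itself runs against the minicluster's configuration).

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public final class DropTestTable {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("GrouptestMultiTableMoveB");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      if (admin.tableExists(table)) {
        admin.disableTable(table); // master runs a DisableTableProcedure (pid=110 above)
        admin.deleteTable(table);  // master runs a DeleteTableProcedure: archive regions, clean hbase:meta (pid=113 above)
      }
    }
  }
}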
2023-07-13 22:16:00,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:00,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:00,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:00,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:00,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2005655762 2023-07-13 22:16:00,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 22:16:00,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:00,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:00,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
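[editor's note] Here the teardown removes the temporary rsgroups and later re-creates the "master" group for the next test. A small sketch of the corresponding client calls, assuming the coprocessor-backed RSGroupAdminClient this test suite uses; group names are taken from the log, everything else is illustrative.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class GroupCleanup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.removeRSGroup("master");                              // RemoveRSGroup call logged above
      // A group can only be removed once its servers have been moved back to "default",
      // which is why the log moves jenkins-hbase4.apache.org:38543 first.
      rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_2005655762");
      rsGroupAdmin.addRSGroup("master");                                 // AddRSGroup re-creates it for the next test
    }
  }
}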
2023-07-13 22:16:00,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:00,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38543] to rsgroup default 2023-07-13 22:16:00,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2005655762 2023-07-13 22:16:00,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:00,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_2005655762, current retry=0 2023-07-13 22:16:00,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242] are moved back to Group_testMultiTableMove_2005655762 2023-07-13 22:16:00,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_2005655762 => default 2023-07-13 22:16:00,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:00,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_2005655762 2023-07-13 22:16:00,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:00,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:00,115 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:00,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:00,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:00,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:00,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:16:00,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:00,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 510 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287760126, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:16:00,127 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 22:16:00,129 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:00,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,130 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:00,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:00,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,149 INFO [Listener at localhost/39613] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=512 (was 512), OpenFileDescriptor=788 (was 788), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=387 (was 387), ProcessCount=172 (was 172), AvailableMemoryMB=3976 (was 4135) 2023-07-13 22:16:00,149 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-13 22:16:00,165 INFO [Listener at localhost/39613] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=512, OpenFileDescriptor=788, MaxFileDescriptor=60000, SystemLoadAverage=387, ProcessCount=172, AvailableMemoryMB=3975 2023-07-13 22:16:00,165 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-13 22:16:00,165 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-13 22:16:00,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:00,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
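[editor's note] The ConstraintException traces above come from the cleanup verification trying to move the master's address (port 34777) into the "master" group; only live region servers are known to the rsgroup manager, so the call is rejected as "offline or it does not exist". The sketch below issues the same moveServers call against a region-server address that does exist, using the RSGroupAdminClient.moveServers entry point shown in the stack trace; host and port values are copied from the log, the rest is illustrative.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveServerExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // A region-server address from the log; the master's 34777 would fail the
      // "offline or does not exist" constraint check seen in the traces above.
      Address regionServer = Address.fromParts("jenkins-hbase4.apache.org", 38543);
      rsGroupAdmin.moveServers(Collections.singleton(regionServer), "default");
    }
  }
}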
2023-07-13 22:16:00,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:00,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:00,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:00,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:00,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:00,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:00,180 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:00,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:00,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:00,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:00,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:16:00,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:00,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 538 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287760191, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:16:00,192 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:16:00,194 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:00,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,195 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:00,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:00,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:00,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-13 22:16:00,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 22:16:00,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:00,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:00,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:38543] to rsgroup oldGroup 2023-07-13 22:16:00,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 22:16:00,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:00,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 22:16:00,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242, jenkins-hbase4.apache.org,39109,1689286541053] are moved back to default 2023-07-13 22:16:00,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-13 22:16:00,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:00,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-13 22:16:00,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-13 22:16:00,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:00,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-13 22:16:00,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-13 22:16:00,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 22:16:00,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 22:16:00,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:00,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39325] to rsgroup anotherRSGroup 2023-07-13 22:16:00,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-13 22:16:00,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 22:16:00,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-13 22:16:00,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 22:16:00,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 22:16:00,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39325,1689286540864] are moved back to default 2023-07-13 22:16:00,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-13 22:16:00,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:00,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-13 22:16:00,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-13 22:16:00,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-13 22:16:00,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:00,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:59834 deadline: 1689287760262, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-13 22:16:00,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-13 22:16:00,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:00,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:59834 deadline: 1689287760264, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-13 22:16:00,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-13 22:16:00,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:00,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:59834 deadline: 1689287760265, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-13 22:16:00,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-13 22:16:00,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:00,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:59834 deadline: 1689287760266, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-13 22:16:00,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:00,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
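
[Editor's note] The three ConstraintException stack traces above all originate in RSGroupInfoManagerImpl.renameRSGroup (lines 403, 407 and 410 in this build) and correspond to the three rename rules this test exercises: the default group cannot be renamed, the source group must exist, and the target name must not already be taken. The following is a minimal, self-contained sketch of that validation order inferred from the log; the map field, class name and use of IllegalStateException in place of ConstraintException are illustrative assumptions, not the actual HBase implementation.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch of the checks suggested by the stack traces above;
    // NOT the real RSGroupInfoManagerImpl code.
    final class RenameValidationSketch {
        static final String DEFAULT_GROUP = "default";
        // hypothetical in-memory view of the groups seen in this test run
        private final Map<String, Object> groups = new HashMap<>();

        void renameRSGroup(String oldName, String newName) {
            if (DEFAULT_GROUP.equals(oldName)) {
                // matches "Can't rename default rsgroup"
                throw new IllegalStateException("Can't rename default rsgroup");
            }
            if (!groups.containsKey(oldName)) {
                // matches "RSGroup nonExistingRSGroup does not exist"
                throw new IllegalStateException("RSGroup " + oldName + " does not exist");
            }
            if (groups.containsKey(newName)) {
                // matches "Group already exists: anotherRSGroup" / "default"
                throw new IllegalStateException("Group already exists: " + newName);
            }
            groups.put(newName, groups.remove(oldName));
        }
    }
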
2023-07-13 22:16:00,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:00,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39325] to rsgroup default 2023-07-13 22:16:00,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-13 22:16:00,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 22:16:00,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 22:16:00,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-13 22:16:00,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39325,1689286540864] are moved back to anotherRSGroup 2023-07-13 22:16:00,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-13 22:16:00,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:00,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-13 22:16:00,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 22:16:00,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-13 22:16:00,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:00,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:00,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-13 22:16:00,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:00,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:38543] to rsgroup default 2023-07-13 22:16:00,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 22:16:00,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:00,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-13 22:16:00,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242, jenkins-hbase4.apache.org,39109,1689286541053] are moved back to oldGroup 2023-07-13 22:16:00,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-13 22:16:00,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:00,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-13 22:16:00,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 22:16:00,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:00,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:00,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
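
[Editor's note] The recurring "moveTables() passed an empty set. Ignoring." DEBUG lines are the teardown path short-circuiting when the group being removed owns no tables. A guard of roughly this shape reproduces that behaviour; the class and method names are assumptions for illustration, not the RSGroupAdminServer source.

    import java.util.Set;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Illustrative empty-set guard clause, mirroring the DEBUG line in the log.
    final class MoveTablesGuardSketch {
        private static final Logger LOG = LoggerFactory.getLogger(MoveTablesGuardSketch.class);

        void moveTables(Set<String> tables, String targetGroup) {
            if (tables == null || tables.isEmpty()) {
                LOG.debug("moveTables() passed an empty set. Ignoring.");
                return; // nothing to move, so the cleanup step is a no-op
            }
            // ... actual table-move logic would go here ...
        }
    }
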
2023-07-13 22:16:00,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:00,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:00,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:00,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:00,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:00,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:00,309 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:00,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:00,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:00,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:00,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:16:00,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:00,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 614 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287760321, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:16:00,321 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:16:00,323 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:00,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,324 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:00,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:00,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,343 INFO [Listener at localhost/39613] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=516 (was 512) Potentially hanging thread: hconnection-0x122baacf-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=788 (was 788), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=387 (was 387), ProcessCount=172 (was 172), AvailableMemoryMB=3973 (was 3975) 2023-07-13 22:16:00,343 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-13 22:16:00,364 INFO [Listener at localhost/39613] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=516, OpenFileDescriptor=788, MaxFileDescriptor=60000, SystemLoadAverage=387, ProcessCount=172, AvailableMemoryMB=3972 2023-07-13 22:16:00,364 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-13 22:16:00,364 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-13 22:16:00,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:00,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
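
[Editor's note] The ResourceChecker before/after lines above (Thread=516 was 512, the "Potentially hanging thread" dumps, and the "Thread=516 is superior to 500" WARN) come from the test rule that compares process-level resources around each test method. A stripped-down sketch of that before/after comparison is below; the 500-thread threshold is taken from the warning text, while the class and field names are assumptions.

    // Illustrative before/after thread-count check, loosely modelled on the
    // ResourceChecker output in this log; not the actual HBase ResourceChecker.
    final class ThreadCountCheckSketch {
        private int before;

        void beforeTest() {
            before = Thread.activeCount();
        }

        void afterTest(String testName) {
            int after = Thread.activeCount();
            if (after > 500) {
                System.out.printf("WARN Thread=%d is superior to 500%n", after);
            }
            System.out.printf("after: %s Thread=%d (was %d)%n", testName, after, before);
        }
    }
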
2023-07-13 22:16:00,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:00,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:00,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:00,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:00,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:00,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:00,386 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:00,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:00,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:00,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:00,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:16:00,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:00,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 642 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287760405, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:16:00,406 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:16:00,408 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:00,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,419 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:00,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:00,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:00,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-13 22:16:00,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 22:16:00,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:00,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:00,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:38543] to rsgroup oldgroup 2023-07-13 22:16:00,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 22:16:00,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:00,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 22:16:00,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242, jenkins-hbase4.apache.org,39109,1689286541053] are moved back to default 2023-07-13 22:16:00,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-13 22:16:00,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:00,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:00,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:00,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-13 22:16:00,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:00,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:00,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-13 22:16:00,463 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:00,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-13 22:16:00,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-13 22:16:00,465 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 22:16:00,466 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:00,466 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:00,467 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:00,470 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:16:00,471 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:00,472 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389 empty. 
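
[Editor's note] The HMaster line above records the create of 'testRename' with a single column family 'tr' and REGION_REPLICATION => '1', which CreateTableProcedure pid=114 then drives through the PRE_OPERATION and WRITE_FS_LAYOUT states. A client-side create equivalent to that schema, written against the standard HBase 2.x Admin API (connection configuration assumed to come from the classpath), would look like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch of a client-side create matching the master log line above.
    public final class CreateTestRenameTable {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                admin.createTable(TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("testRename"))
                    .setRegionReplication(1)                       // REGION_REPLICATION => '1'
                    .setColumnFamily(ColumnFamilyDescriptorBuilder
                        .newBuilder(Bytes.toBytes("tr"))           // family 'tr'
                        .setMaxVersions(1)                         // VERSIONS => '1'
                        .build())
                    .build());
            }
        }
    }
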
2023-07-13 22:16:00,473 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:00,473 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-13 22:16:00,523 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:00,525 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1f5d6fab0cf581ad20cfb9da5c269389, NAME => 'testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:16:00,553 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:00,553 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 1f5d6fab0cf581ad20cfb9da5c269389, disabling compactions & flushes 2023-07-13 22:16:00,553 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:00,554 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:00,554 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. after waiting 0 ms 2023-07-13 22:16:00,554 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:00,554 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:00,554 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 1f5d6fab0cf581ad20cfb9da5c269389: 2023-07-13 22:16:00,560 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:16:00,561 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689286560561"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286560561"}]},"ts":"1689286560561"} 2023-07-13 22:16:00,563 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-13 22:16:00,564 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:16:00,564 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286560564"}]},"ts":"1689286560564"} 2023-07-13 22:16:00,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-13 22:16:00,566 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-13 22:16:00,572 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:00,572 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:00,572 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:00,573 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:00,573 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, ASSIGN}] 2023-07-13 22:16:00,576 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, ASSIGN 2023-07-13 22:16:00,576 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39325,1689286540864; forceNewPlan=false, retain=false 2023-07-13 22:16:00,727 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 22:16:00,728 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=1f5d6fab0cf581ad20cfb9da5c269389, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:00,728 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689286560728"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286560728"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286560728"}]},"ts":"1689286560728"} 2023-07-13 22:16:00,730 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 1f5d6fab0cf581ad20cfb9da5c269389, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:16:00,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-13 22:16:00,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:00,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1f5d6fab0cf581ad20cfb9da5c269389, NAME => 'testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:00,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:00,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:00,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:00,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:00,888 INFO [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:00,890 DEBUG [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389/tr 2023-07-13 22:16:00,890 DEBUG [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389/tr 2023-07-13 22:16:00,890 INFO [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1f5d6fab0cf581ad20cfb9da5c269389 columnFamilyName tr 2023-07-13 22:16:00,891 INFO [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] regionserver.HStore(310): Store=1f5d6fab0cf581ad20cfb9da5c269389/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:00,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:00,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:00,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:00,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:00,899 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1f5d6fab0cf581ad20cfb9da5c269389; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9783779840, jitterRate=-0.08881449699401855}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:00,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1f5d6fab0cf581ad20cfb9da5c269389: 2023-07-13 22:16:00,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389., pid=116, masterSystemTime=1689286560881 2023-07-13 22:16:00,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:00,908 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 
2023-07-13 22:16:00,908 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=1f5d6fab0cf581ad20cfb9da5c269389, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:00,908 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689286560908"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286560908"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286560908"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286560908"}]},"ts":"1689286560908"} 2023-07-13 22:16:00,912 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-13 22:16:00,912 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 1f5d6fab0cf581ad20cfb9da5c269389, server=jenkins-hbase4.apache.org,39325,1689286540864 in 180 msec 2023-07-13 22:16:00,915 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-13 22:16:00,915 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, ASSIGN in 339 msec 2023-07-13 22:16:00,917 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:16:00,917 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286560917"}]},"ts":"1689286560917"} 2023-07-13 22:16:00,919 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-13 22:16:00,922 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:16:00,924 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 462 msec 2023-07-13 22:16:01,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-13 22:16:01,068 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-13 22:16:01,069 DEBUG [Listener at localhost/39613] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-13 22:16:01,069 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:01,076 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-13 22:16:01,076 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:01,076 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
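The block above is the master side of creating testRename: CreateTableProcedure pid=114 writes the FS layout, adds the region to hbase:meta, assigns it via pid=115/116, and the client's create call returns once procId 114 completes. For orientation, a minimal client-side sketch against the public HBase 2.x Admin API follows; the configuration and connection setup are assumptions for illustration, not something shown in the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public final class CreateTestRenameSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();   // points at the (mini) cluster -- assumed setup
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("testRename");
      // Single column family 'tr' with default settings, matching the descriptor the master logs.
      admin.createTable(TableDescriptorBuilder.newBuilder(tn)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build());
      // createTable() blocks until CreateTableProcedure (pid=114 above) finishes; the test harness
      // then calls HBaseTestingUtility#waitUntilAllRegionsAssigned(tn), which produces the
      // "Waiting until all regions of table testRename get assigned" lines in the log.
    }
  }
}

The later unmovedTable creation (pid=120) follows the same pattern with a single 'ut' family, so it is not repeated below.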
2023-07-13 22:16:01,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-13 22:16:01,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 22:16:01,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:01,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:01,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:01,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-13 22:16:01,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region 1f5d6fab0cf581ad20cfb9da5c269389 to RSGroup oldgroup 2023-07-13 22:16:01,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:01,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:01,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:01,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:16:01,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:01,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, REOPEN/MOVE 2023-07-13 22:16:01,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-13 22:16:01,088 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, REOPEN/MOVE 2023-07-13 22:16:01,089 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=1f5d6fab0cf581ad20cfb9da5c269389, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:01,089 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689286561089"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286561089"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286561089"}]},"ts":"1689286561089"} 2023-07-13 22:16:01,092 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure 1f5d6fab0cf581ad20cfb9da5c269389, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:16:01,244 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:01,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1f5d6fab0cf581ad20cfb9da5c269389, disabling compactions & flushes 2023-07-13 22:16:01,246 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:01,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:01,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. after waiting 0 ms 2023-07-13 22:16:01,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:01,254 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-13 22:16:01,254 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:16:01,256 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 
2023-07-13 22:16:01,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1f5d6fab0cf581ad20cfb9da5c269389: 2023-07-13 22:16:01,256 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1f5d6fab0cf581ad20cfb9da5c269389 move to jenkins-hbase4.apache.org,39109,1689286541053 record at close sequenceid=2 2023-07-13 22:16:01,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:01,260 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=1f5d6fab0cf581ad20cfb9da5c269389, regionState=CLOSED 2023-07-13 22:16:01,261 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689286561260"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286561260"}]},"ts":"1689286561260"} 2023-07-13 22:16:01,265 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-13 22:16:01,265 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 1f5d6fab0cf581ad20cfb9da5c269389, server=jenkins-hbase4.apache.org,39325,1689286540864 in 172 msec 2023-07-13 22:16:01,266 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39109,1689286541053; forceNewPlan=false, retain=false 2023-07-13 22:16:01,416 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 22:16:01,417 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=1f5d6fab0cf581ad20cfb9da5c269389, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:16:01,417 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689286561417"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286561417"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286561417"}]},"ts":"1689286561417"} 2023-07-13 22:16:01,419 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 1f5d6fab0cf581ad20cfb9da5c269389, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:16:01,576 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 
2023-07-13 22:16:01,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1f5d6fab0cf581ad20cfb9da5c269389, NAME => 'testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:01,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:01,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:01,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:01,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:01,578 INFO [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:01,579 DEBUG [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389/tr 2023-07-13 22:16:01,579 DEBUG [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389/tr 2023-07-13 22:16:01,579 INFO [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1f5d6fab0cf581ad20cfb9da5c269389 columnFamilyName tr 2023-07-13 22:16:01,580 INFO [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] regionserver.HStore(310): Store=1f5d6fab0cf581ad20cfb9da5c269389/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:01,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:01,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:01,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:01,586 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1f5d6fab0cf581ad20cfb9da5c269389; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10207627360, jitterRate=-0.04934062063694}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:01,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1f5d6fab0cf581ad20cfb9da5c269389: 2023-07-13 22:16:01,586 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389., pid=119, masterSystemTime=1689286561572 2023-07-13 22:16:01,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:01,588 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:01,588 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=1f5d6fab0cf581ad20cfb9da5c269389, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:16:01,588 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689286561588"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286561588"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286561588"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286561588"}]},"ts":"1689286561588"} 2023-07-13 22:16:01,592 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-13 22:16:01,592 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 1f5d6fab0cf581ad20cfb9da5c269389, server=jenkins-hbase4.apache.org,39109,1689286541053 in 171 msec 2023-07-13 22:16:01,593 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, REOPEN/MOVE in 506 msec 2023-07-13 22:16:02,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-13 22:16:02,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
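The entries above are the master handling a MoveTables request: the single region of testRename gets TransitRegionStateProcedure pid=117 (REOPEN/MOVE), is closed on jenkins-hbase4.apache.org,39325 and reopened on jenkins-hbase4.apache.org,39109 so that it lands in the target group, after which "All regions from table(s) [testRename] moved to target group oldgroup" is logged. A hedged sketch of the client call that triggers this, assuming the RSGroupAdminClient helper from the hbase-rsgroup module:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveTableToOldGroupSketch {
  // 'conn' is an open Connection to the cluster (assumption; the log does not show client setup).
  static void moveTestRename(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Issues RSGroupAdminService.MoveTables; the master then reopens every region of the table
    // on servers of 'oldgroup' -- the REOPEN/MOVE procedure (pid=117) visible above.
    rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
  }
}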
2023-07-13 22:16:02,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:02,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:02,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:02,096 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:02,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-13 22:16:02,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:16:02,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-13 22:16:02,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:02,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-13 22:16:02,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:16:02,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:02,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:02,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-13 22:16:02,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 22:16:02,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 22:16:02,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:02,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 
22:16:02,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 22:16:02,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:02,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:02,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:02,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39325] to rsgroup normal 2023-07-13 22:16:02,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 22:16:02,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 22:16:02,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:02,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:02,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 22:16:02,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 22:16:02,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39325,1689286540864] are moved back to default 2023-07-13 22:16:02,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-13 22:16:02,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:02,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:02,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:02,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-13 22:16:02,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 
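Next the client creates a second group and moves one region server into it: the znode updates and "Move servers done: default => normal" above correspond to AddRSGroup followed by MoveServers for jenkins-hbase4.apache.org:39325. A minimal sketch of those two calls, again assuming RSGroupAdminClient; the host:port literal is copied from the log.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveServerToNormalSketch {
  // 'conn' is an open Connection to the cluster (assumption; not part of the log).
  static void addGroupAndMoveServer(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("normal");                       // RSGroupAdminService.AddRSGroup
    // RSGroupAdminService.MoveServers: pull one region server out of 'default' into 'normal'.
    // No regions needed to be shuffled off it, hence "Moving 0 region(s) to group default" above.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:39325")), "normal");
  }
}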
2023-07-13 22:16:02,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:02,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-13 22:16:02,138 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:02,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-13 22:16:02,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 22:16:02,140 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 22:16:02,141 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 22:16:02,141 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:02,141 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:02,142 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 22:16:02,144 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:16:02,145 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:02,146 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837 empty. 
2023-07-13 22:16:02,146 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:02,146 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-13 22:16:02,162 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:02,163 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6412afb39fb47dcabd5281695612c837, NAME => 'unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:16:02,175 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:02,175 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 6412afb39fb47dcabd5281695612c837, disabling compactions & flushes 2023-07-13 22:16:02,175 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:02,175 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:02,176 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. after waiting 0 ms 2023-07-13 22:16:02,176 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:02,176 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:02,176 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 6412afb39fb47dcabd5281695612c837: 2023-07-13 22:16:02,178 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:16:02,178 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689286562178"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286562178"}]},"ts":"1689286562178"} 2023-07-13 22:16:02,182 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-13 22:16:02,183 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:16:02,183 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286562183"}]},"ts":"1689286562183"} 2023-07-13 22:16:02,184 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-13 22:16:02,188 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, ASSIGN}] 2023-07-13 22:16:02,190 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, ASSIGN 2023-07-13 22:16:02,190 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:16:02,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 22:16:02,342 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=6412afb39fb47dcabd5281695612c837, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:02,342 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689286562342"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286562342"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286562342"}]},"ts":"1689286562342"} 2023-07-13 22:16:02,343 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 6412afb39fb47dcabd5281695612c837, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:16:02,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 22:16:02,508 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 
2023-07-13 22:16:02,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6412afb39fb47dcabd5281695612c837, NAME => 'unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:02,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:02,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:02,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:02,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:02,511 INFO [StoreOpener-6412afb39fb47dcabd5281695612c837-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:02,512 DEBUG [StoreOpener-6412afb39fb47dcabd5281695612c837-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837/ut 2023-07-13 22:16:02,512 DEBUG [StoreOpener-6412afb39fb47dcabd5281695612c837-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837/ut 2023-07-13 22:16:02,512 INFO [StoreOpener-6412afb39fb47dcabd5281695612c837-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6412afb39fb47dcabd5281695612c837 columnFamilyName ut 2023-07-13 22:16:02,513 INFO [StoreOpener-6412afb39fb47dcabd5281695612c837-1] regionserver.HStore(310): Store=6412afb39fb47dcabd5281695612c837/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:02,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:02,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:02,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:02,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:02,536 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6412afb39fb47dcabd5281695612c837; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10913852160, jitterRate=0.016431689262390137}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:02,536 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6412afb39fb47dcabd5281695612c837: 2023-07-13 22:16:02,537 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837., pid=122, masterSystemTime=1689286562495 2023-07-13 22:16:02,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:02,538 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 
2023-07-13 22:16:02,539 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=6412afb39fb47dcabd5281695612c837, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:02,539 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689286562539"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286562539"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286562539"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286562539"}]},"ts":"1689286562539"} 2023-07-13 22:16:02,542 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-13 22:16:02,542 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 6412afb39fb47dcabd5281695612c837, server=jenkins-hbase4.apache.org,43571,1689286544760 in 198 msec 2023-07-13 22:16:02,544 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-13 22:16:02,544 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, ASSIGN in 354 msec 2023-07-13 22:16:02,545 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:16:02,545 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286562545"}]},"ts":"1689286562545"} 2023-07-13 22:16:02,546 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-13 22:16:02,548 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:16:02,550 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 413 msec 2023-07-13 22:16:02,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 22:16:02,743 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-13 22:16:02,743 DEBUG [Listener at localhost/39613] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-13 22:16:02,743 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:02,760 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-13 22:16:02,761 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:02,761 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
2023-07-13 22:16:02,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-13 22:16:02,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 22:16:02,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 22:16:02,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:02,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:02,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 22:16:02,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-13 22:16:02,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region 6412afb39fb47dcabd5281695612c837 to RSGroup normal 2023-07-13 22:16:02,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, REOPEN/MOVE 2023-07-13 22:16:02,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-13 22:16:02,786 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, REOPEN/MOVE 2023-07-13 22:16:02,787 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=6412afb39fb47dcabd5281695612c837, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:02,788 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689286562787"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286562787"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286562787"}]},"ts":"1689286562787"} 2023-07-13 22:16:02,789 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 6412afb39fb47dcabd5281695612c837, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:16:02,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:02,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6412afb39fb47dcabd5281695612c837, disabling compactions & flushes 2023-07-13 22:16:02,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 
2023-07-13 22:16:02,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:02,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. after waiting 0 ms 2023-07-13 22:16:02,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:02,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:16:02,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:02,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6412afb39fb47dcabd5281695612c837: 2023-07-13 22:16:02,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6412afb39fb47dcabd5281695612c837 move to jenkins-hbase4.apache.org,39325,1689286540864 record at close sequenceid=2 2023-07-13 22:16:02,950 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:02,950 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=6412afb39fb47dcabd5281695612c837, regionState=CLOSED 2023-07-13 22:16:02,950 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689286562950"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286562950"}]},"ts":"1689286562950"} 2023-07-13 22:16:02,953 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-13 22:16:02,953 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 6412afb39fb47dcabd5281695612c837, server=jenkins-hbase4.apache.org,43571,1689286544760 in 162 msec 2023-07-13 22:16:02,954 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39325,1689286540864; forceNewPlan=false, retain=false 2023-07-13 22:16:03,104 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=6412afb39fb47dcabd5281695612c837, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:03,104 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689286563104"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286563104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286563104"}]},"ts":"1689286563104"} 2023-07-13 22:16:03,106 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 6412afb39fb47dcabd5281695612c837, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:16:03,262 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:03,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6412afb39fb47dcabd5281695612c837, NAME => 'unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:03,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:03,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:03,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:03,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:03,264 INFO [StoreOpener-6412afb39fb47dcabd5281695612c837-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:03,265 DEBUG [StoreOpener-6412afb39fb47dcabd5281695612c837-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837/ut 2023-07-13 22:16:03,265 DEBUG [StoreOpener-6412afb39fb47dcabd5281695612c837-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837/ut 2023-07-13 22:16:03,265 INFO [StoreOpener-6412afb39fb47dcabd5281695612c837-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
6412afb39fb47dcabd5281695612c837 columnFamilyName ut 2023-07-13 22:16:03,266 INFO [StoreOpener-6412afb39fb47dcabd5281695612c837-1] regionserver.HStore(310): Store=6412afb39fb47dcabd5281695612c837/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:03,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:03,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:03,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:03,274 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6412afb39fb47dcabd5281695612c837; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10624677280, jitterRate=-0.010499820113182068}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:03,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6412afb39fb47dcabd5281695612c837: 2023-07-13 22:16:03,275 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837., pid=125, masterSystemTime=1689286563258 2023-07-13 22:16:03,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:03,277 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 
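Editor's note: the close/open pair above (pid=124/125) is the region actually changing servers: meta is updated to CLOSED on jenkins-hbase4.apache.org,43571, the region is assigned to the target-group server 39325, the store for family ut is opened, and "Post open deploy tasks" report back to the master. A small sketch of how a client could confirm where the region landed after such a move, using the standard RegionLocator API (only the table name and empty start key are taken from this log):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CheckRegionLocation {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("unmovedTable"))) {
          // reload=true forces a fresh meta lookup so the cached pre-move location is not returned.
          HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
          System.out.println(loc.getRegion().getEncodedName() + " is on " + loc.getServerName());
        }
      }
    }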
2023-07-13 22:16:03,277 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=6412afb39fb47dcabd5281695612c837, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:03,278 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689286563277"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286563277"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286563277"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286563277"}]},"ts":"1689286563277"} 2023-07-13 22:16:03,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-13 22:16:03,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 6412afb39fb47dcabd5281695612c837, server=jenkins-hbase4.apache.org,39325,1689286540864 in 173 msec 2023-07-13 22:16:03,285 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, REOPEN/MOVE in 497 msec 2023-07-13 22:16:03,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-13 22:16:03,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-13 22:16:03,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:03,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:03,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:03,793 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:03,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-13 22:16:03,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:16:03,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-13 22:16:03,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:03,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-13 22:16:03,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:16:03,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-13 22:16:03,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 22:16:03,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:03,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:03,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 22:16:03,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-13 22:16:03,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-13 22:16:03,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:03,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:03,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-13 22:16:03,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:03,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-13 22:16:03,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:16:03,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-13 22:16:03,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:16:03,814 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:03,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:03,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-13 22:16:03,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 22:16:03,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:03,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:03,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 22:16:03,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 22:16:03,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-13 22:16:03,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region 6412afb39fb47dcabd5281695612c837 to RSGroup default 2023-07-13 22:16:03,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, REOPEN/MOVE 2023-07-13 22:16:03,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 22:16:03,825 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, REOPEN/MOVE 2023-07-13 22:16:03,826 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=6412afb39fb47dcabd5281695612c837, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:03,826 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689286563826"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286563826"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286563826"}]},"ts":"1689286563826"} 2023-07-13 22:16:03,827 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 6412afb39fb47dcabd5281695612c837, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:16:03,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:03,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6412afb39fb47dcabd5281695612c837, disabling compactions & flushes 2023-07-13 22:16:03,982 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:03,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:03,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. after waiting 0 ms 2023-07-13 22:16:03,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:03,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 22:16:03,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:03,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6412afb39fb47dcabd5281695612c837: 2023-07-13 22:16:03,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6412afb39fb47dcabd5281695612c837 move to jenkins-hbase4.apache.org,43571,1689286544760 record at close sequenceid=5 2023-07-13 22:16:03,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:03,991 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=6412afb39fb47dcabd5281695612c837, regionState=CLOSED 2023-07-13 22:16:03,992 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689286563991"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286563991"}]},"ts":"1689286563991"} 2023-07-13 22:16:04,014 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-13 22:16:04,014 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 6412afb39fb47dcabd5281695612c837, server=jenkins-hbase4.apache.org,39325,1689286540864 in 176 msec 2023-07-13 22:16:04,015 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:16:04,166 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=6412afb39fb47dcabd5281695612c837, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:04,166 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689286564166"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286564166"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286564166"}]},"ts":"1689286564166"} 2023-07-13 22:16:04,168 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 6412afb39fb47dcabd5281695612c837, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:16:04,324 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:04,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6412afb39fb47dcabd5281695612c837, NAME => 'unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:04,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:04,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:04,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:04,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:04,326 INFO [StoreOpener-6412afb39fb47dcabd5281695612c837-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:04,328 DEBUG [StoreOpener-6412afb39fb47dcabd5281695612c837-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837/ut 2023-07-13 22:16:04,328 DEBUG [StoreOpener-6412afb39fb47dcabd5281695612c837-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837/ut 2023-07-13 22:16:04,328 INFO [StoreOpener-6412afb39fb47dcabd5281695612c837-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6412afb39fb47dcabd5281695612c837 columnFamilyName ut 2023-07-13 22:16:04,329 INFO [StoreOpener-6412afb39fb47dcabd5281695612c837-1] regionserver.HStore(310): Store=6412afb39fb47dcabd5281695612c837/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:04,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:04,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:04,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:04,335 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6412afb39fb47dcabd5281695612c837; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10426459360, jitterRate=-0.028960302472114563}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:04,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6412afb39fb47dcabd5281695612c837: 2023-07-13 22:16:04,336 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837., pid=128, masterSystemTime=1689286564319 2023-07-13 22:16:04,337 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:04,337 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 
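Editor's note: the block above also captures the RenameRSGroup call (oldgroup to newgroup): the remaining group znodes are rewritten, the ZK GroupInfo count moves from 8 to 9, and the following GetRSGroupInfo/GetRSGroupInfoOfTable requests confirm that testRename now resolves through the renamed group while unmovedTable stays where it was. A hedged sketch of the client side of that check; the renameRSGroup method name is an assumption inferred from the RenameRSGroup RPC logged here, and the info getters are the ones the test exercises:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RenameGroupCheck {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");  // assumed method, per the RenameRSGroup RPC above
          RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
          RSGroupInfo ofTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          // After the rename, the table should report the new group name.
          System.out.println(renamed.getName() + " / " + ofTable.getName());
        }
      }
    }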
2023-07-13 22:16:04,338 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=6412afb39fb47dcabd5281695612c837, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:04,338 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689286564338"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286564338"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286564338"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286564338"}]},"ts":"1689286564338"} 2023-07-13 22:16:04,341 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-13 22:16:04,341 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 6412afb39fb47dcabd5281695612c837, server=jenkins-hbase4.apache.org,43571,1689286544760 in 172 msec 2023-07-13 22:16:04,342 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=6412afb39fb47dcabd5281695612c837, REOPEN/MOVE in 516 msec 2023-07-13 22:16:04,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-13 22:16:04,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-13 22:16:04,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:04,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39325] to rsgroup default 2023-07-13 22:16:04,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 22:16:04,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:04,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:04,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 22:16:04,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 22:16:04,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-13 22:16:04,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39325,1689286540864] are moved back to normal 2023-07-13 22:16:04,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-13 22:16:04,833 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:04,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-13 22:16:04,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:04,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:04,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 22:16:04,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-13 22:16:04,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:04,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:04,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 22:16:04,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:04,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:04,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:04,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:04,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:04,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 22:16:04,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 22:16:04,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:04,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-13 22:16:04,850 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:04,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 22:16:04,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:04,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-13 22:16:04,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(345): Moving region 1f5d6fab0cf581ad20cfb9da5c269389 to RSGroup default 2023-07-13 22:16:04,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, REOPEN/MOVE 2023-07-13 22:16:04,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 22:16:04,855 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, REOPEN/MOVE 2023-07-13 22:16:04,856 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=1f5d6fab0cf581ad20cfb9da5c269389, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:16:04,856 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689286564856"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286564856"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286564856"}]},"ts":"1689286564856"} 2023-07-13 22:16:04,857 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 1f5d6fab0cf581ad20cfb9da5c269389, server=jenkins-hbase4.apache.org,39109,1689286541053}] 2023-07-13 22:16:04,919 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 22:16:05,012 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:05,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1f5d6fab0cf581ad20cfb9da5c269389, disabling compactions & flushes 2023-07-13 22:16:05,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:05,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:05,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 
after waiting 0 ms 2023-07-13 22:16:05,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:05,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 22:16:05,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:05,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1f5d6fab0cf581ad20cfb9da5c269389: 2023-07-13 22:16:05,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1f5d6fab0cf581ad20cfb9da5c269389 move to jenkins-hbase4.apache.org,39325,1689286540864 record at close sequenceid=5 2023-07-13 22:16:05,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:05,021 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=1f5d6fab0cf581ad20cfb9da5c269389, regionState=CLOSED 2023-07-13 22:16:05,021 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689286565021"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286565021"}]},"ts":"1689286565021"} 2023-07-13 22:16:05,025 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-13 22:16:05,025 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 1f5d6fab0cf581ad20cfb9da5c269389, server=jenkins-hbase4.apache.org,39109,1689286541053 in 166 msec 2023-07-13 22:16:05,026 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39325,1689286540864; forceNewPlan=false, retain=false 2023-07-13 22:16:05,176 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
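Editor's note: from the MoveServers/RemoveRSGroup calls above onward, the log is the test's teardown: servers are returned to the default group, the temporary groups (normal, master, newgroup) are dropped, and the testRename region is sent back via another REOPEN/MOVE (pid=129). A minimal sketch of the server-move/remove-group portion, assuming RSGroupAdminClient.moveServers(Set<Address>, String) and removeRSGroup(String) as named in the stack trace further down; the host:port literal is taken from this log:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class TearDownGroups {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Move the region server back into the default group, then drop the now-empty group,
          // matching "Move servers done: normal => default" and "remove rsgroup normal" above.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:39325")), "default");
          rsGroupAdmin.removeRSGroup("normal");
        }
      }
    }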
2023-07-13 22:16:05,177 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=1f5d6fab0cf581ad20cfb9da5c269389, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:05,177 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689286565176"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286565176"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286565176"}]},"ts":"1689286565176"} 2023-07-13 22:16:05,182 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 1f5d6fab0cf581ad20cfb9da5c269389, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:16:05,340 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:05,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1f5d6fab0cf581ad20cfb9da5c269389, NAME => 'testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:05,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:05,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:05,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:05,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:05,342 INFO [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:05,343 DEBUG [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389/tr 2023-07-13 22:16:05,343 DEBUG [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389/tr 2023-07-13 22:16:05,344 INFO [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1f5d6fab0cf581ad20cfb9da5c269389 columnFamilyName tr 2023-07-13 22:16:05,344 INFO [StoreOpener-1f5d6fab0cf581ad20cfb9da5c269389-1] regionserver.HStore(310): Store=1f5d6fab0cf581ad20cfb9da5c269389/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:05,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:05,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:05,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:05,350 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1f5d6fab0cf581ad20cfb9da5c269389; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9851588800, jitterRate=-0.08249929547309875}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:05,350 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1f5d6fab0cf581ad20cfb9da5c269389: 2023-07-13 22:16:05,351 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389., pid=131, masterSystemTime=1689286565336 2023-07-13 22:16:05,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:05,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 
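Editor's note: the remainder of the section finishes the teardown: the master address jenkins-hbase4.apache.org:34777 is offered to the "master" group and rejected with a ConstraintException ("is either offline or it does not exist"), which TestRSGroupsBase logs as "Got this on setup, FYI" and tolerates, and then the listener waits for the group layout to settle before the next test (testBogusArgs) starts. A hedged sketch of that wait pattern using the HBase test utility Waiter API; the predicate body is illustrative, not the test's exact check:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.Waiter;

    public class WaitForCleanup {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Poll until cleanup is done or 60s elapse, mirroring "Waiting up to [60,000] milli-secs".
        Waiter.waitFor(conf, 60_000, new Waiter.Predicate<Exception>() {
          @Override
          public boolean evaluate() throws Exception {
            return onlyDefaultAndMasterGroupsRemain();  // hypothetical check; the test inspects the listed rsgroups
          }
        });
      }

      private static boolean onlyDefaultAndMasterGroupsRemain() {
        return true;  // placeholder for the real rsgroup inspection
      }
    }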
2023-07-13 22:16:05,353 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=1f5d6fab0cf581ad20cfb9da5c269389, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:05,353 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689286565353"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286565353"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286565353"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286565353"}]},"ts":"1689286565353"} 2023-07-13 22:16:05,357 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-13 22:16:05,357 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 1f5d6fab0cf581ad20cfb9da5c269389, server=jenkins-hbase4.apache.org,39325,1689286540864 in 174 msec 2023-07-13 22:16:05,359 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=1f5d6fab0cf581ad20cfb9da5c269389, REOPEN/MOVE in 504 msec 2023-07-13 22:16:05,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-13 22:16:05,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-13 22:16:05,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:05,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:38543] to rsgroup default 2023-07-13 22:16:05,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:05,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 22:16:05,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:05,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-13 22:16:05,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242, jenkins-hbase4.apache.org,39109,1689286541053] are moved back to newgroup 2023-07-13 22:16:05,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-13 22:16:05,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:05,863 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-13 22:16:05,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:05,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:05,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:05,874 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:05,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:05,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:05,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:05,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:05,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:05,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:05,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:05,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:16:05,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:05,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 762 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287765897, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:16:05,898 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:16:05,903 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:05,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:05,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:05,904 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:05,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:05,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:05,931 INFO [Listener at localhost/39613] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=509 (was 516), OpenFileDescriptor=770 (was 788), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=381 (was 387), ProcessCount=171 (was 172), AvailableMemoryMB=3786 (was 3972) 2023-07-13 22:16:05,931 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-13 22:16:05,953 INFO [Listener at localhost/39613] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=509, OpenFileDescriptor=770, MaxFileDescriptor=60000, SystemLoadAverage=381, ProcessCount=172, AvailableMemoryMB=3786 2023-07-13 22:16:05,953 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-13 22:16:05,953 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-13 22:16:05,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:05,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:05,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:05,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 22:16:05,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:05,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:05,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:05,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:05,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:05,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:05,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:05,974 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:05,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:05,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:05,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:05,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:05,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:05,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:05,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:05,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:16:05,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:05,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 790 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287765985, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:16:05,985 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 22:16:05,987 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:05,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:05,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:05,988 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:05,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:05,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:05,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-13 22:16:05,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:16:05,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-13 22:16:05,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-13 22:16:05,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-13 22:16:05,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:05,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-13 22:16:05,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:05,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:59834 deadline: 1689287765996, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-13 22:16:05,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-13 22:16:05,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:05,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:59834 deadline: 1689287765999, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-13 22:16:06,002 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-13 22:16:06,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-13 22:16:06,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-13 22:16:06,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:06,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 809 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:59834 deadline: 1689287766015, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-13 22:16:06,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:06,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:06,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:06,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 22:16:06,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:06,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:06,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:06,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:06,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:06,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:06,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:06,030 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:06,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:06,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:06,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:06,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:06,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:06,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:06,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:06,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:16:06,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:06,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 833 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287766047, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:16:06,051 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:16:06,052 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:06,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:06,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:06,053 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:06,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:06,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:06,075 INFO [Listener at localhost/39613] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=512 (was 509) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6e0626cd-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=770 (was 770), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=381 (was 381), ProcessCount=172 (was 172), AvailableMemoryMB=3786 (was 3786) 2023-07-13 22:16:06,075 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-13 22:16:06,094 INFO [Listener at localhost/39613] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512, OpenFileDescriptor=770, MaxFileDescriptor=60000, SystemLoadAverage=381, ProcessCount=172, AvailableMemoryMB=3785 2023-07-13 22:16:06,094 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-13 22:16:06,094 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-13 22:16:06,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:06,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:06,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:06,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 22:16:06,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:06,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:06,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:06,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:06,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:06,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:06,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:06,115 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:06,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:06,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:06,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:06,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:06,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:06,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:06,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:06,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:16:06,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:06,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 861 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287766130, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:16:06,131 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:16:06,133 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:06,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:06,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:06,134 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:06,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:06,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:06,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:06,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:06,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1207611293 2023-07-13 22:16:06,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1207611293 2023-07-13 22:16:06,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 
22:16:06,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:06,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:06,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:06,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:06,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:06,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:38543] to rsgroup Group_testDisabledTableMove_1207611293 2023-07-13 22:16:06,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1207611293 2023-07-13 22:16:06,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:06,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:06,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:06,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 22:16:06,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242, jenkins-hbase4.apache.org,39109,1689286541053] are moved back to default 2023-07-13 22:16:06,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1207611293 2023-07-13 22:16:06,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:06,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:06,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:06,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1207611293 2023-07-13 22:16:06,171 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:06,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:06,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-13 22:16:06,177 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:06,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-13 22:16:06,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-13 22:16:06,179 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1207611293 2023-07-13 22:16:06,179 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:06,180 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:06,180 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:06,182 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:16:06,187 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:06,187 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:06,187 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:06,188 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:06,188 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:06,188 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af empty. 2023-07-13 22:16:06,188 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6 empty. 2023-07-13 22:16:06,188 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f empty. 2023-07-13 22:16:06,189 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:06,189 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0 empty. 2023-07-13 22:16:06,189 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:06,189 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:06,191 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1 empty. 
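The RSGroupAdminService entries above are the TestRSGroupsBase teardown/setup cycle: tables and servers are moved back to the default group, stale groups are removed, the special 'master' group is re-added, and a fresh Group_testDisabledTableMove_1207611293 group receives two of the four region servers. The one failure, the ConstraintException on jenkins-hbase4.apache.org:34777, is expected: that address is the active master rather than a live region server, so RSGroupAdminServer.moveServers rejects it and the test merely logs "Got this on setup, FYI" and continues. A minimal sketch of the same client calls, assuming the HBase 2.x hbase-rsgroup client API (connection setup and the exact group name here are illustrative, not the test's own code):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupSetupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);

          // Create the per-test group, as RSGroupAdminService.AddRSGroup does above.
          groups.addRSGroup("Group_testDisabledTableMove_1207611293");

          // Move a live region server into it (host:port addresses, as in the log).
          groups.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 39109)),
              "Group_testDisabledTableMove_1207611293");

          // Trying to move the master's own address is rejected, because only
          // addresses registered as online region servers can belong to a group.
          try {
            groups.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34777)),
                "master");
          } catch (org.apache.hadoop.hbase.constraint.ConstraintException expected) {
            // "Server ... is either offline or it does not exist." -- same as the WARN above.
          }
        }
      }
    }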
2023-07-13 22:16:06,191 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:06,191 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:06,191 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-13 22:16:06,219 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:06,221 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 3763c6e9f845ad3a15a3fa4a06fc6cd6, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:16:06,221 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 87ec01817911902b3a40573b3c1438af, NAME => 'Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:16:06,221 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => c66670b1b34bf1183d02595bb500937f, NAME => 'Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:16:06,253 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; 
hotProtect now enable 2023-07-13 22:16:06,253 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:06,253 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 3763c6e9f845ad3a15a3fa4a06fc6cd6, disabling compactions & flushes 2023-07-13 22:16:06,253 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 87ec01817911902b3a40573b3c1438af, disabling compactions & flushes 2023-07-13 22:16:06,253 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. 2023-07-13 22:16:06,253 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. 2023-07-13 22:16:06,254 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. 2023-07-13 22:16:06,254 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. 2023-07-13 22:16:06,254 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. after waiting 0 ms 2023-07-13 22:16:06,254 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. 2023-07-13 22:16:06,254 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. after waiting 0 ms 2023-07-13 22:16:06,254 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. 2023-07-13 22:16:06,254 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 87ec01817911902b3a40573b3c1438af: 2023-07-13 22:16:06,254 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. 2023-07-13 22:16:06,254 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. 
2023-07-13 22:16:06,254 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 3763c6e9f845ad3a15a3fa4a06fc6cd6: 2023-07-13 22:16:06,255 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5fdb2f1d78b0e82bfa5ce8615607ccc0, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:16:06,255 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => fc5a5d34a0ad212081d1ccf4f884a4f1, NAME => 'Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp 2023-07-13 22:16:06,256 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:06,257 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing c66670b1b34bf1183d02595bb500937f, disabling compactions & flushes 2023-07-13 22:16:06,257 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. 2023-07-13 22:16:06,257 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. 2023-07-13 22:16:06,257 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. after waiting 0 ms 2023-07-13 22:16:06,257 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. 2023-07-13 22:16:06,257 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. 
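The HMaster entry above records the schema for 'Group_testDisabledTableMove' (a single family 'f', one version, no compression), and the RegionOpenAndInit entries show CREATE_TABLE_WRITE_FS_LAYOUT instantiating each of the five regions under the cluster's .tmp directory and closing it again immediately; at this stage the regions exist only on the filesystem. The split points ('aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz') are what an even five-way split of the ['aaaaa','zzzzz'] range produces. A sketch of an equivalent client-side create, assuming the standard HBase 2.x Admin API (connection setup is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateMultiRegionTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableDescriptorBuilder table =
              TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
                  // Matches the logged schema: family 'f', VERSIONS => '1', defaults elsewhere.
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                      .setMaxVersions(1)
                      .build());
          // Pre-split into 5 regions between 'aaaaa' and 'zzzzz'; the intermediate
          // boundaries (the i\xBF... and r\x1C... keys in the log) are derived from that range.
          admin.createTable(table.build(), Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 5);
        }
      }
    }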
2023-07-13 22:16:06,257 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for c66670b1b34bf1183d02595bb500937f: 2023-07-13 22:16:06,266 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:06,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 5fdb2f1d78b0e82bfa5ce8615607ccc0, disabling compactions & flushes 2023-07-13 22:16:06,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:06,267 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. 2023-07-13 22:16:06,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing fc5a5d34a0ad212081d1ccf4f884a4f1, disabling compactions & flushes 2023-07-13 22:16:06,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. 2023-07-13 22:16:06,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. after waiting 0 ms 2023-07-13 22:16:06,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. 2023-07-13 22:16:06,267 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. 2023-07-13 22:16:06,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 5fdb2f1d78b0e82bfa5ce8615607ccc0: 2023-07-13 22:16:06,267 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. 2023-07-13 22:16:06,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. 2023-07-13 22:16:06,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. after waiting 0 ms 2023-07-13 22:16:06,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. 
2023-07-13 22:16:06,267 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. 2023-07-13 22:16:06,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for fc5a5d34a0ad212081d1ccf4f884a4f1: 2023-07-13 22:16:06,269 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:16:06,270 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689286566270"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286566270"}]},"ts":"1689286566270"} 2023-07-13 22:16:06,271 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566270"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286566270"}]},"ts":"1689286566270"} 2023-07-13 22:16:06,271 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566270"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286566270"}]},"ts":"1689286566270"} 2023-07-13 22:16:06,271 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566270"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286566270"}]},"ts":"1689286566270"} 2023-07-13 22:16:06,271 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689286566270"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286566270"}]},"ts":"1689286566270"} 2023-07-13 22:16:06,273 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
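CREATE_TABLE_ADD_TO_META then writes one row per region into hbase:meta (each Put above carries an info:regioninfo and an info:state qualifier), the MetaTableAccessor line confirms all 5 regions were registered, and the table state is flipped to ENABLING next. Once the procedure completes, the same information is visible through the client API; a small sketch that lists the registered regions via the standard HBase 2.x Admin (connection setup assumed, not the test's own verification code):

    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListTableRegionsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Reads back the region rows that CREATE_TABLE_ADD_TO_META wrote to hbase:meta.
          List<RegionInfo> regions =
              admin.getRegions(TableName.valueOf("Group_testDisabledTableMove"));
          for (RegionInfo ri : regions) {
            System.out.println(ri.getEncodedName()
                + " [" + Bytes.toStringBinary(ri.getStartKey())
                + ", " + Bytes.toStringBinary(ri.getEndKey()) + ")");
          }
          // Expect 5 regions with boundaries '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'.
        }
      }
    }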
2023-07-13 22:16:06,274 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:16:06,274 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286566274"}]},"ts":"1689286566274"} 2023-07-13 22:16:06,275 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-13 22:16:06,278 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:06,279 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:06,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-13 22:16:06,279 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:06,279 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:06,279 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=87ec01817911902b3a40573b3c1438af, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c66670b1b34bf1183d02595bb500937f, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3763c6e9f845ad3a15a3fa4a06fc6cd6, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fdb2f1d78b0e82bfa5ce8615607ccc0, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fc5a5d34a0ad212081d1ccf4f884a4f1, ASSIGN}] 2023-07-13 22:16:06,280 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c66670b1b34bf1183d02595bb500937f, ASSIGN 2023-07-13 22:16:06,280 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3763c6e9f845ad3a15a3fa4a06fc6cd6, ASSIGN 2023-07-13 22:16:06,280 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=87ec01817911902b3a40573b3c1438af, ASSIGN 2023-07-13 22:16:06,280 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fdb2f1d78b0e82bfa5ce8615607ccc0, ASSIGN 2023-07-13 
22:16:06,281 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c66670b1b34bf1183d02595bb500937f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39325,1689286540864; forceNewPlan=false, retain=false 2023-07-13 22:16:06,281 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3763c6e9f845ad3a15a3fa4a06fc6cd6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:16:06,281 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fc5a5d34a0ad212081d1ccf4f884a4f1, ASSIGN 2023-07-13 22:16:06,281 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=87ec01817911902b3a40573b3c1438af, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39325,1689286540864; forceNewPlan=false, retain=false 2023-07-13 22:16:06,281 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fdb2f1d78b0e82bfa5ce8615607ccc0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43571,1689286544760; forceNewPlan=false, retain=false 2023-07-13 22:16:06,282 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fc5a5d34a0ad212081d1ccf4f884a4f1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39325,1689286540864; forceNewPlan=false, retain=false 2023-07-13 22:16:06,431 INFO [jenkins-hbase4:34777] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
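CREATE_TABLE_ASSIGN_REGIONS spawns one TransitRegionStateProcedure per region (pids 133-137). Every candidate location is jenkins-hbase4.apache.org,39325 or ...,43571: those are the two servers still in the default rsgroup (38543 and 39109 were just moved into Group_testDisabledTableMove_1207611293, and the table itself has not been moved to that group yet), so the group-aware placement keeps the new table's regions on default-group servers. In a test, waiting for this stage usually means polling until every region has a location; a sketch using the mini-cluster utility already in use here (method availability on this branch assumed):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignmentSketch {
      // Blocks until hbase:meta shows a server assignment for every region of the table.
      public static void waitForTable(HBaseTestingUtility testUtil) throws Exception {
        testUtil.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testDisabledTableMove"));
        // Alternatively, testUtil.waitTableAvailable(...) also waits for the ENABLED state.
      }
    }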
2023-07-13 22:16:06,435 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=fc5a5d34a0ad212081d1ccf4f884a4f1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:06,435 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=87ec01817911902b3a40573b3c1438af, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:06,435 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=3763c6e9f845ad3a15a3fa4a06fc6cd6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:06,435 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=c66670b1b34bf1183d02595bb500937f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:06,435 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=5fdb2f1d78b0e82bfa5ce8615607ccc0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:06,435 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566435"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286566435"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286566435"}]},"ts":"1689286566435"} 2023-07-13 22:16:06,435 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566435"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286566435"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286566435"}]},"ts":"1689286566435"} 2023-07-13 22:16:06,435 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689286566435"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286566435"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286566435"}]},"ts":"1689286566435"} 2023-07-13 22:16:06,435 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689286566435"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286566435"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286566435"}]},"ts":"1689286566435"} 2023-07-13 22:16:06,435 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566435"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286566435"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286566435"}]},"ts":"1689286566435"} 2023-07-13 22:16:06,437 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=134, state=RUNNABLE; OpenRegionProcedure c66670b1b34bf1183d02595bb500937f, 
server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:16:06,438 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=135, state=RUNNABLE; OpenRegionProcedure 3763c6e9f845ad3a15a3fa4a06fc6cd6, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:16:06,439 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=137, state=RUNNABLE; OpenRegionProcedure fc5a5d34a0ad212081d1ccf4f884a4f1, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:16:06,440 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=133, state=RUNNABLE; OpenRegionProcedure 87ec01817911902b3a40573b3c1438af, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:16:06,441 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=136, state=RUNNABLE; OpenRegionProcedure 5fdb2f1d78b0e82bfa5ce8615607ccc0, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:16:06,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-13 22:16:06,594 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. 2023-07-13 22:16:06,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5fdb2f1d78b0e82bfa5ce8615607ccc0, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 22:16:06,594 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. 
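Each ASSIGN procedure queues a child OpenRegionProcedure (pids 138-142) against its chosen server, and the RS_OPEN_REGION handlers below pick them up: the region is instantiated, its 'f' store is opened, and a recovered.edits/1.seqid marker is written. The resulting placement can be read back from the client side with a RegionLocator; a brief sketch, again assuming a live connection to this mini-cluster:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionPlacementSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator =
                 conn.getRegionLocator(TableName.valueOf("Group_testDisabledTableMove"))) {
          // One HRegionLocation per region; the server names match the OpenRegionProcedure
          // targets, e.g. jenkins-hbase4.apache.org,39325,1689286540864 and ...,43571,1689286544760.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getRegionNameAsString() + " -> " + loc.getServerName());
          }
        }
      }
    }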
2023-07-13 22:16:06,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fc5a5d34a0ad212081d1ccf4f884a4f1, NAME => 'Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 22:16:06,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:06,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:06,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:06,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:06,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:06,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:06,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:06,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:06,596 INFO [StoreOpener-5fdb2f1d78b0e82bfa5ce8615607ccc0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:06,597 INFO [StoreOpener-fc5a5d34a0ad212081d1ccf4f884a4f1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:06,598 DEBUG [StoreOpener-5fdb2f1d78b0e82bfa5ce8615607ccc0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0/f 2023-07-13 22:16:06,598 DEBUG [StoreOpener-5fdb2f1d78b0e82bfa5ce8615607ccc0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0/f 2023-07-13 22:16:06,598 DEBUG [StoreOpener-fc5a5d34a0ad212081d1ccf4f884a4f1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1/f 2023-07-13 22:16:06,599 DEBUG [StoreOpener-fc5a5d34a0ad212081d1ccf4f884a4f1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1/f 2023-07-13 22:16:06,599 INFO [StoreOpener-5fdb2f1d78b0e82bfa5ce8615607ccc0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5fdb2f1d78b0e82bfa5ce8615607ccc0 columnFamilyName f 2023-07-13 22:16:06,599 INFO [StoreOpener-fc5a5d34a0ad212081d1ccf4f884a4f1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fc5a5d34a0ad212081d1ccf4f884a4f1 columnFamilyName f 2023-07-13 22:16:06,599 INFO [StoreOpener-fc5a5d34a0ad212081d1ccf4f884a4f1-1] regionserver.HStore(310): Store=fc5a5d34a0ad212081d1ccf4f884a4f1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:06,599 INFO [StoreOpener-5fdb2f1d78b0e82bfa5ce8615607ccc0-1] regionserver.HStore(310): Store=5fdb2f1d78b0e82bfa5ce8615607ccc0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:06,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:06,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:06,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:06,601 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:06,603 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:06,603 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:06,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:06,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:06,608 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fc5a5d34a0ad212081d1ccf4f884a4f1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11923178720, jitterRate=0.11043255031108856}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:06,608 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5fdb2f1d78b0e82bfa5ce8615607ccc0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11120114080, jitterRate=0.03564132750034332}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:06,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fc5a5d34a0ad212081d1ccf4f884a4f1: 2023-07-13 22:16:06,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5fdb2f1d78b0e82bfa5ce8615607ccc0: 2023-07-13 22:16:06,609 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1., pid=140, masterSystemTime=1689286566590 2023-07-13 22:16:06,609 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0., pid=142, masterSystemTime=1689286566590 2023-07-13 22:16:06,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. 2023-07-13 22:16:06,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. 2023-07-13 22:16:06,611 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. 
2023-07-13 22:16:06,611 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=5fdb2f1d78b0e82bfa5ce8615607ccc0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:06,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3763c6e9f845ad3a15a3fa4a06fc6cd6, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 22:16:06,611 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566611"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286566611"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286566611"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286566611"}]},"ts":"1689286566611"} 2023-07-13 22:16:06,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:06,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:06,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:06,614 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=136 2023-07-13 22:16:06,612 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=fc5a5d34a0ad212081d1ccf4f884a4f1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:06,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. 
2023-07-13 22:16:06,615 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=136, state=SUCCESS; OpenRegionProcedure 5fdb2f1d78b0e82bfa5ce8615607ccc0, server=jenkins-hbase4.apache.org,43571,1689286544760 in 171 msec 2023-07-13 22:16:06,615 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689286566612"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286566612"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286566612"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286566612"}]},"ts":"1689286566612"} 2023-07-13 22:16:06,614 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:06,615 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. 2023-07-13 22:16:06,615 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. 2023-07-13 22:16:06,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 87ec01817911902b3a40573b3c1438af, NAME => 'Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 22:16:06,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:06,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:06,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:06,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:06,616 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fdb2f1d78b0e82bfa5ce8615607ccc0, ASSIGN in 335 msec 2023-07-13 22:16:06,617 INFO [StoreOpener-3763c6e9f845ad3a15a3fa4a06fc6cd6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:06,618 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=137 2023-07-13 22:16:06,618 INFO [StoreOpener-87ec01817911902b3a40573b3c1438af-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for 
column family f of region 87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:06,618 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=137, state=SUCCESS; OpenRegionProcedure fc5a5d34a0ad212081d1ccf4f884a4f1, server=jenkins-hbase4.apache.org,39325,1689286540864 in 177 msec 2023-07-13 22:16:06,619 DEBUG [StoreOpener-3763c6e9f845ad3a15a3fa4a06fc6cd6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6/f 2023-07-13 22:16:06,619 DEBUG [StoreOpener-3763c6e9f845ad3a15a3fa4a06fc6cd6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6/f 2023-07-13 22:16:06,619 INFO [StoreOpener-3763c6e9f845ad3a15a3fa4a06fc6cd6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3763c6e9f845ad3a15a3fa4a06fc6cd6 columnFamilyName f 2023-07-13 22:16:06,620 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fc5a5d34a0ad212081d1ccf4f884a4f1, ASSIGN in 339 msec 2023-07-13 22:16:06,620 DEBUG [StoreOpener-87ec01817911902b3a40573b3c1438af-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af/f 2023-07-13 22:16:06,620 DEBUG [StoreOpener-87ec01817911902b3a40573b3c1438af-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af/f 2023-07-13 22:16:06,620 INFO [StoreOpener-3763c6e9f845ad3a15a3fa4a06fc6cd6-1] regionserver.HStore(310): Store=3763c6e9f845ad3a15a3fa4a06fc6cd6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:06,620 INFO [StoreOpener-87ec01817911902b3a40573b3c1438af-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
87ec01817911902b3a40573b3c1438af columnFamilyName f 2023-07-13 22:16:06,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:06,621 INFO [StoreOpener-87ec01817911902b3a40573b3c1438af-1] regionserver.HStore(310): Store=87ec01817911902b3a40573b3c1438af/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:06,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:06,622 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:06,622 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:06,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:06,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:06,626 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:06,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:06,627 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3763c6e9f845ad3a15a3fa4a06fc6cd6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11611944800, jitterRate=0.08144663274288177}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:06,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3763c6e9f845ad3a15a3fa4a06fc6cd6: 2023-07-13 22:16:06,627 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 87ec01817911902b3a40573b3c1438af; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10777873920, jitterRate=0.003767728805541992}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:06,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(965): Region open journal for 87ec01817911902b3a40573b3c1438af: 2023-07-13 22:16:06,628 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af., pid=141, masterSystemTime=1689286566590 2023-07-13 22:16:06,628 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6., pid=139, masterSystemTime=1689286566590 2023-07-13 22:16:06,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. 2023-07-13 22:16:06,629 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. 2023-07-13 22:16:06,629 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. 2023-07-13 22:16:06,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c66670b1b34bf1183d02595bb500937f, NAME => 'Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 22:16:06,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:06,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:06,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:06,629 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=87ec01817911902b3a40573b3c1438af, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:06,630 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=3763c6e9f845ad3a15a3fa4a06fc6cd6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:06,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. 
2023-07-13 22:16:06,630 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566630"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286566630"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286566630"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286566630"}]},"ts":"1689286566630"} 2023-07-13 22:16:06,630 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. 2023-07-13 22:16:06,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:06,630 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689286566629"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286566629"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286566629"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286566629"}]},"ts":"1689286566629"} 2023-07-13 22:16:06,632 INFO [StoreOpener-c66670b1b34bf1183d02595bb500937f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:06,633 DEBUG [StoreOpener-c66670b1b34bf1183d02595bb500937f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f/f 2023-07-13 22:16:06,633 DEBUG [StoreOpener-c66670b1b34bf1183d02595bb500937f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f/f 2023-07-13 22:16:06,634 INFO [StoreOpener-c66670b1b34bf1183d02595bb500937f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c66670b1b34bf1183d02595bb500937f columnFamilyName f 2023-07-13 22:16:06,634 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=135 2023-07-13 22:16:06,634 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; OpenRegionProcedure 3763c6e9f845ad3a15a3fa4a06fc6cd6, server=jenkins-hbase4.apache.org,43571,1689286544760 in 
193 msec 2023-07-13 22:16:06,635 INFO [StoreOpener-c66670b1b34bf1183d02595bb500937f-1] regionserver.HStore(310): Store=c66670b1b34bf1183d02595bb500937f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:06,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:06,636 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=133 2023-07-13 22:16:06,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3763c6e9f845ad3a15a3fa4a06fc6cd6, ASSIGN in 355 msec 2023-07-13 22:16:06,636 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=133, state=SUCCESS; OpenRegionProcedure 87ec01817911902b3a40573b3c1438af, server=jenkins-hbase4.apache.org,39325,1689286540864 in 193 msec 2023-07-13 22:16:06,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:06,639 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=87ec01817911902b3a40573b3c1438af, ASSIGN in 357 msec 2023-07-13 22:16:06,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:06,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:06,650 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c66670b1b34bf1183d02595bb500937f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11948534560, jitterRate=0.11279399693012238}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:06,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c66670b1b34bf1183d02595bb500937f: 2023-07-13 22:16:06,650 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f., pid=138, masterSystemTime=1689286566590 2023-07-13 22:16:06,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. 2023-07-13 22:16:06,652 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. 
2023-07-13 22:16:06,652 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=c66670b1b34bf1183d02595bb500937f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:06,653 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566652"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286566652"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286566652"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286566652"}]},"ts":"1689286566652"} 2023-07-13 22:16:06,656 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=134 2023-07-13 22:16:06,656 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=134, state=SUCCESS; OpenRegionProcedure c66670b1b34bf1183d02595bb500937f, server=jenkins-hbase4.apache.org,39325,1689286540864 in 217 msec 2023-07-13 22:16:06,658 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-13 22:16:06,658 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c66670b1b34bf1183d02595bb500937f, ASSIGN in 377 msec 2023-07-13 22:16:06,659 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:16:06,659 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286566659"}]},"ts":"1689286566659"} 2023-07-13 22:16:06,661 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-13 22:16:06,663 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:16:06,664 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 490 msec 2023-07-13 22:16:06,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-13 22:16:06,782 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-13 22:16:06,782 DEBUG [Listener at localhost/39613] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-13 22:16:06,782 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:06,787 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
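The entries above trace pid=132 (CreateTableProcedure for Group_testDisabledTableMove) through its five region ASSIGNs to completion, after which the client waits until every region is reported assigned. A minimal client-side sketch of that sequence, assuming the standard HBase 2.x Admin API, the single column family "f" seen in the log, and simplified printable split keys in place of the test's binary ones:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateGroupTestTable {
  public static void main(String[] args) throws Exception {
    TableName name = TableName.valueOf("Group_testDisabledTableMove");
    // Five regions as in the log: empty start key plus four split points
    // (placeholder keys; the test itself uses partly binary split keys).
    byte[][] splits = {
        Bytes.toBytes("aaaaa"), Bytes.toBytes("iiiii"),
        Bytes.toBytes("rrrrr"), Bytes.toBytes("zzzzz")
    };
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // Drives the CreateTableProcedure and the per-region ASSIGN procedures
      // recorded above; the call returns once the procedure completes.
      admin.createTable(desc, splits);
    }
  }
}

The test additionally polls hbase:meta and the assignment manager until all regions show as assigned, which is what the "Waiting until all regions ... get assigned" entries reflect.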
2023-07-13 22:16:06,787 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:06,787 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-13 22:16:06,788 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:06,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-13 22:16:06,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:16:06,795 INFO [Listener at localhost/39613] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-13 22:16:06,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-13 22:16:06,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-13 22:16:06,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-13 22:16:06,800 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286566800"}]},"ts":"1689286566800"} 2023-07-13 22:16:06,801 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-13 22:16:06,803 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-13 22:16:06,804 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=87ec01817911902b3a40573b3c1438af, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c66670b1b34bf1183d02595bb500937f, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3763c6e9f845ad3a15a3fa4a06fc6cd6, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fdb2f1d78b0e82bfa5ce8615607ccc0, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fc5a5d34a0ad212081d1ccf4f884a4f1, UNASSIGN}] 2023-07-13 22:16:06,804 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fdb2f1d78b0e82bfa5ce8615607ccc0, UNASSIGN 2023-07-13 22:16:06,805 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3763c6e9f845ad3a15a3fa4a06fc6cd6, UNASSIGN 2023-07-13 22:16:06,805 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c66670b1b34bf1183d02595bb500937f, UNASSIGN 2023-07-13 22:16:06,805 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=87ec01817911902b3a40573b3c1438af, UNASSIGN 2023-07-13 22:16:06,805 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fc5a5d34a0ad212081d1ccf4f884a4f1, UNASSIGN 2023-07-13 22:16:06,806 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=5fdb2f1d78b0e82bfa5ce8615607ccc0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:06,806 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=3763c6e9f845ad3a15a3fa4a06fc6cd6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:06,806 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=87ec01817911902b3a40573b3c1438af, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:06,806 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=fc5a5d34a0ad212081d1ccf4f884a4f1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:06,806 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689286566806"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286566806"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286566806"}]},"ts":"1689286566806"} 2023-07-13 22:16:06,806 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689286566806"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286566806"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286566806"}]},"ts":"1689286566806"} 2023-07-13 22:16:06,806 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566806"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286566806"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286566806"}]},"ts":"1689286566806"} 2023-07-13 22:16:06,806 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=c66670b1b34bf1183d02595bb500937f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:06,806 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566806"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286566806"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286566806"}]},"ts":"1689286566806"} 2023-07-13 22:16:06,807 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566806"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286566806"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286566806"}]},"ts":"1689286566806"} 2023-07-13 22:16:06,808 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=144, state=RUNNABLE; CloseRegionProcedure 87ec01817911902b3a40573b3c1438af, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:16:06,808 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=148, state=RUNNABLE; CloseRegionProcedure fc5a5d34a0ad212081d1ccf4f884a4f1, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:16:06,809 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=147, state=RUNNABLE; CloseRegionProcedure 5fdb2f1d78b0e82bfa5ce8615607ccc0, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:16:06,809 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=146, state=RUNNABLE; CloseRegionProcedure 3763c6e9f845ad3a15a3fa4a06fc6cd6, server=jenkins-hbase4.apache.org,43571,1689286544760}] 2023-07-13 22:16:06,810 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=145, state=RUNNABLE; CloseRegionProcedure c66670b1b34bf1183d02595bb500937f, server=jenkins-hbase4.apache.org,39325,1689286540864}] 2023-07-13 22:16:06,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-13 22:16:06,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:06,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:06,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c66670b1b34bf1183d02595bb500937f, disabling compactions & flushes 2023-07-13 22:16:06,962 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. 2023-07-13 22:16:06,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3763c6e9f845ad3a15a3fa4a06fc6cd6, disabling compactions & flushes 2023-07-13 22:16:06,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. 
2023-07-13 22:16:06,962 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. 2023-07-13 22:16:06,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. after waiting 0 ms 2023-07-13 22:16:06,963 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. 2023-07-13 22:16:06,963 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. 2023-07-13 22:16:06,963 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. after waiting 0 ms 2023-07-13 22:16:06,963 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. 2023-07-13 22:16:06,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:16:06,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:16:06,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f. 2023-07-13 22:16:06,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6. 2023-07-13 22:16:06,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c66670b1b34bf1183d02595bb500937f: 2023-07-13 22:16:06,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3763c6e9f845ad3a15a3fa4a06fc6cd6: 2023-07-13 22:16:06,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:06,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:06,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fc5a5d34a0ad212081d1ccf4f884a4f1, disabling compactions & flushes 2023-07-13 22:16:06,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. 
2023-07-13 22:16:06,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. 2023-07-13 22:16:06,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. after waiting 0 ms 2023-07-13 22:16:06,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. 2023-07-13 22:16:06,969 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=c66670b1b34bf1183d02595bb500937f, regionState=CLOSED 2023-07-13 22:16:06,969 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566969"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286566969"}]},"ts":"1689286566969"} 2023-07-13 22:16:06,970 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:06,970 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:06,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5fdb2f1d78b0e82bfa5ce8615607ccc0, disabling compactions & flushes 2023-07-13 22:16:06,970 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. 2023-07-13 22:16:06,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. 2023-07-13 22:16:06,971 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. after waiting 0 ms 2023-07-13 22:16:06,971 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. 
2023-07-13 22:16:06,971 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=3763c6e9f845ad3a15a3fa4a06fc6cd6, regionState=CLOSED 2023-07-13 22:16:06,971 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566971"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286566971"}]},"ts":"1689286566971"} 2023-07-13 22:16:06,974 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=145 2023-07-13 22:16:06,974 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=145, state=SUCCESS; CloseRegionProcedure c66670b1b34bf1183d02595bb500937f, server=jenkins-hbase4.apache.org,39325,1689286540864 in 162 msec 2023-07-13 22:16:06,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:16:06,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1. 2023-07-13 22:16:06,974 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=146 2023-07-13 22:16:06,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:16:06,974 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=146, state=SUCCESS; CloseRegionProcedure 3763c6e9f845ad3a15a3fa4a06fc6cd6, server=jenkins-hbase4.apache.org,43571,1689286544760 in 163 msec 2023-07-13 22:16:06,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fc5a5d34a0ad212081d1ccf4f884a4f1: 2023-07-13 22:16:06,975 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c66670b1b34bf1183d02595bb500937f, UNASSIGN in 170 msec 2023-07-13 22:16:06,975 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0. 
2023-07-13 22:16:06,975 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5fdb2f1d78b0e82bfa5ce8615607ccc0: 2023-07-13 22:16:06,975 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3763c6e9f845ad3a15a3fa4a06fc6cd6, UNASSIGN in 170 msec 2023-07-13 22:16:06,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:06,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:06,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 87ec01817911902b3a40573b3c1438af, disabling compactions & flushes 2023-07-13 22:16:06,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. 2023-07-13 22:16:06,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. 2023-07-13 22:16:06,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. after waiting 0 ms 2023-07-13 22:16:06,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. 2023-07-13 22:16:06,977 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=fc5a5d34a0ad212081d1ccf4f884a4f1, regionState=CLOSED 2023-07-13 22:16:06,977 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689286566977"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286566977"}]},"ts":"1689286566977"} 2023-07-13 22:16:06,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:06,977 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=5fdb2f1d78b0e82bfa5ce8615607ccc0, regionState=CLOSED 2023-07-13 22:16:06,977 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689286566977"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286566977"}]},"ts":"1689286566977"} 2023-07-13 22:16:06,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:16:06,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af. 
2023-07-13 22:16:06,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 87ec01817911902b3a40573b3c1438af: 2023-07-13 22:16:06,980 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=148 2023-07-13 22:16:06,980 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=148, state=SUCCESS; CloseRegionProcedure fc5a5d34a0ad212081d1ccf4f884a4f1, server=jenkins-hbase4.apache.org,39325,1689286540864 in 171 msec 2023-07-13 22:16:06,981 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=147 2023-07-13 22:16:06,981 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=147, state=SUCCESS; CloseRegionProcedure 5fdb2f1d78b0e82bfa5ce8615607ccc0, server=jenkins-hbase4.apache.org,43571,1689286544760 in 170 msec 2023-07-13 22:16:06,982 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:06,982 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fc5a5d34a0ad212081d1ccf4f884a4f1, UNASSIGN in 176 msec 2023-07-13 22:16:06,982 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=87ec01817911902b3a40573b3c1438af, regionState=CLOSED 2023-07-13 22:16:06,982 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689286566982"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286566982"}]},"ts":"1689286566982"} 2023-07-13 22:16:06,982 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fdb2f1d78b0e82bfa5ce8615607ccc0, UNASSIGN in 177 msec 2023-07-13 22:16:06,984 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=144 2023-07-13 22:16:06,984 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=144, state=SUCCESS; CloseRegionProcedure 87ec01817911902b3a40573b3c1438af, server=jenkins-hbase4.apache.org,39325,1689286540864 in 175 msec 2023-07-13 22:16:06,986 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=143 2023-07-13 22:16:06,986 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=87ec01817911902b3a40573b3c1438af, UNASSIGN in 181 msec 2023-07-13 22:16:06,986 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286566986"}]},"ts":"1689286566986"} 2023-07-13 22:16:06,987 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-13 22:16:06,990 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-13 22:16:06,992 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 
195 msec 2023-07-13 22:16:07,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-13 22:16:07,102 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-13 22:16:07,102 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1207611293 2023-07-13 22:16:07,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1207611293 2023-07-13 22:16:07,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1207611293 2023-07-13 22:16:07,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:07,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:07,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:07,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-13 22:16:07,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1207611293, current retry=0 2023-07-13 22:16:07,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1207611293. 
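The move above shows the rsgroup path for a disabled table: the group metadata in ZooKeeper is rewritten, but region moves are skipped ("Moving 0 region(s)") because the table has no open regions. A hedged sketch of the client calls involved, assuming the RSGroupAdminClient from the hbase-rsgroup module exercised by this test (the table and group names are taken from the log):

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveDisabledTableToGroup {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    String group = "Group_testDisabledTableMove_1207611293";
    try (Connection conn = ConnectionFactory.createConnection()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // For a table whose regions are all CLOSED, only the rsgroup assignment
      // is updated; the master logs "Moving 0 region(s)" as seen above.
      rsGroupAdmin.moveTables(Collections.singleton(table), group);
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println(table + " now belongs to rsgroup " + info.getName());
    }
  }
}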
2023-07-13 22:16:07,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:07,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:07,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:07,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-13 22:16:07,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:16:07,115 INFO [Listener at localhost/39613] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-13 22:16:07,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-13 22:16:07,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:07,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 921 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:59834 deadline: 1689286627115, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-13 22:16:07,117 DEBUG [Listener at localhost/39613] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
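The TableNotEnabledException above is the expected preflight failure when disabling a table that is already DISABLED; the testing utility reacts by skipping the disable and deleting the table directly. A small sketch of equivalent client-side handling, assuming only the standard Admin API (the helper name is illustrative, not from the test):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.TableNotEnabledException;
import org.apache.hadoop.hbase.client.Admin;

public final class DropTableSafely {
  // Disable (if needed) and delete a table, tolerating the already-disabled case.
  static void disableAndDelete(Admin admin, TableName table) throws Exception {
    if (!admin.isTableDisabled(table)) {
      try {
        admin.disableTable(table);
      } catch (TableNotEnabledException e) {
        // Table was disabled concurrently; same outcome as the log above,
        // so fall through and delete it.
      }
    }
    admin.deleteTable(table);
  }
}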
2023-07-13 22:16:07,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-13 22:16:07,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 22:16:07,120 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 22:16:07,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1207611293' 2023-07-13 22:16:07,121 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 22:16:07,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1207611293 2023-07-13 22:16:07,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:07,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:07,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:07,128 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:07,128 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:07,128 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:07,128 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:07,128 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:07,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-13 22:16:07,131 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1/f, FileablePath, 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1/recovered.edits] 2023-07-13 22:16:07,131 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0/recovered.edits] 2023-07-13 22:16:07,132 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6/recovered.edits] 2023-07-13 22:16:07,132 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af/recovered.edits] 2023-07-13 22:16:07,132 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f/f, FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f/recovered.edits] 2023-07-13 22:16:07,140 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0/recovered.edits/4.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0/recovered.edits/4.seqid 2023-07-13 22:16:07,141 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1/recovered.edits/4.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1/recovered.edits/4.seqid 2023-07-13 22:16:07,141 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/5fdb2f1d78b0e82bfa5ce8615607ccc0 2023-07-13 22:16:07,142 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/fc5a5d34a0ad212081d1ccf4f884a4f1 2023-07-13 22:16:07,142 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from 
FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f/recovered.edits/4.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f/recovered.edits/4.seqid 2023-07-13 22:16:07,143 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6/recovered.edits/4.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6/recovered.edits/4.seqid 2023-07-13 22:16:07,143 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af/recovered.edits/4.seqid to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/archive/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af/recovered.edits/4.seqid 2023-07-13 22:16:07,143 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/c66670b1b34bf1183d02595bb500937f 2023-07-13 22:16:07,144 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/3763c6e9f845ad3a15a3fa4a06fc6cd6 2023-07-13 22:16:07,144 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/.tmp/data/default/Group_testDisabledTableMove/87ec01817911902b3a40573b3c1438af 2023-07-13 22:16:07,144 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-13 22:16:07,147 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 22:16:07,149 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-13 22:16:07,153 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-13 22:16:07,154 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 22:16:07,154 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-13 22:16:07,155 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286567154"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:07,155 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286567154"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:07,155 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286567154"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:07,155 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286567154"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:07,155 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286567154"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:07,156 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-13 22:16:07,156 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 87ec01817911902b3a40573b3c1438af, NAME => 'Group_testDisabledTableMove,,1689286566172.87ec01817911902b3a40573b3c1438af.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => c66670b1b34bf1183d02595bb500937f, NAME => 'Group_testDisabledTableMove,aaaaa,1689286566172.c66670b1b34bf1183d02595bb500937f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 3763c6e9f845ad3a15a3fa4a06fc6cd6, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689286566172.3763c6e9f845ad3a15a3fa4a06fc6cd6.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 5fdb2f1d78b0e82bfa5ce8615607ccc0, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689286566172.5fdb2f1d78b0e82bfa5ce8615607ccc0.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => fc5a5d34a0ad212081d1ccf4f884a4f1, NAME => 'Group_testDisabledTableMove,zzzzz,1689286566172.fc5a5d34a0ad212081d1ccf4f884a4f1.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-13 22:16:07,156 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-13 22:16:07,156 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689286567156"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:07,158 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-13 22:16:07,160 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 22:16:07,161 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 43 msec 2023-07-13 22:16:07,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-13 22:16:07,232 INFO [Listener at localhost/39613] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-13 22:16:07,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:07,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:07,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:07,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 22:16:07,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:07,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:38543] to rsgroup default 2023-07-13 22:16:07,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1207611293 2023-07-13 22:16:07,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:07,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:07,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:07,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1207611293, current retry=0 2023-07-13 22:16:07,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38543,1689286541242, jenkins-hbase4.apache.org,39109,1689286541053] are moved back to Group_testDisabledTableMove_1207611293 2023-07-13 22:16:07,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1207611293 => default 2023-07-13 22:16:07,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:07,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1207611293 2023-07-13 22:16:07,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:07,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:07,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 22:16:07,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:07,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:07,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 22:16:07,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:07,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:07,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:07,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:07,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:07,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:07,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:07,255 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:07,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:07,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:07,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:07,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:07,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:07,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:07,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:07,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:16:07,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:07,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 955 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287767264, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:16:07,264 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:16:07,266 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:07,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:07,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:07,267 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:07,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:07,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:07,285 INFO [Listener at localhost/39613] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=515 (was 512) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1235144863_17 at /127.0.0.1:41256 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2414dac3-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1563578774_17 at /127.0.0.1:38208 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x122baacf-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=799 (was 770) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=381 (was 381), ProcessCount=172 (was 172), AvailableMemoryMB=3788 (was 3785) - AvailableMemoryMB LEAK? 
- 2023-07-13 22:16:07,286 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-13 22:16:07,301 INFO [Listener at localhost/39613] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=515, OpenFileDescriptor=799, MaxFileDescriptor=60000, SystemLoadAverage=381, ProcessCount=172, AvailableMemoryMB=3787 2023-07-13 22:16:07,301 WARN [Listener at localhost/39613] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-13 22:16:07,301 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-13 22:16:07,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:07,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:07,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:07,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 22:16:07,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:07,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:07,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:07,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:07,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:07,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:07,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:07,315 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:07,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:07,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:07,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-13 22:16:07,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:07,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:07,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:07,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:07,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34777] to rsgroup master 2023-07-13 22:16:07,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:07,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] ipc.CallRunner(144): callId: 983 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59834 deadline: 1689287767328, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 2023-07-13 22:16:07,329 WARN [Listener at localhost/39613] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34777 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 22:16:07,330 INFO [Listener at localhost/39613] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:07,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:07,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:07,331 INFO [Listener at localhost/39613] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38543, jenkins-hbase4.apache.org:39109, jenkins-hbase4.apache.org:39325, jenkins-hbase4.apache.org:43571], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:07,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:07,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:07,332 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-13 22:16:07,332 INFO [Listener at localhost/39613] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 22:16:07,333 DEBUG [Listener at localhost/39613] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x427575b1 to 127.0.0.1:54493 2023-07-13 22:16:07,333 DEBUG [Listener at localhost/39613] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:07,334 DEBUG [Listener at localhost/39613] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 22:16:07,334 DEBUG [Listener at localhost/39613] util.JVMClusterUtil(257): Found active master hash=1367865489, stopped=false 2023-07-13 22:16:07,334 DEBUG [Listener at localhost/39613] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 22:16:07,334 DEBUG [Listener at localhost/39613] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 22:16:07,335 INFO [Listener at localhost/39613] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34777,1689286538976 2023-07-13 22:16:07,337 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:07,337 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:07,337 INFO [Listener at localhost/39613] procedure2.ProcedureExecutor(629): Stopping 2023-07-13 22:16:07,337 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:07,337 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:07,337 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:07,337 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:07,338 DEBUG [Listener at localhost/39613] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3c694acc to 127.0.0.1:54493 2023-07-13 22:16:07,339 DEBUG [Listener at localhost/39613] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:07,339 INFO [Listener at localhost/39613] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39325,1689286540864' ***** 2023-07-13 22:16:07,339 INFO [Listener at localhost/39613] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 22:16:07,339 INFO [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:07,340 INFO [Listener at localhost/39613] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39109,1689286541053' ***** 2023-07-13 22:16:07,340 INFO [Listener at localhost/39613] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 22:16:07,340 INFO [Listener at localhost/39613] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38543,1689286541242' ***** 2023-07-13 22:16:07,340 INFO [Listener at localhost/39613] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 22:16:07,340 INFO [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:07,340 INFO [Listener at localhost/39613] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43571,1689286544760' ***** 2023-07-13 22:16:07,340 INFO [Listener at localhost/39613] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 22:16:07,340 INFO [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:07,341 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:07,361 INFO [RS:0;jenkins-hbase4:39325] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@11ebf93c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:07,361 INFO [RS:3;jenkins-hbase4:43571] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6436c29b{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:07,361 INFO [RS:2;jenkins-hbase4:38543] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@13a27dea{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:07,361 INFO [RS:1;jenkins-hbase4:39109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3757a44c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:07,366 INFO [RS:0;jenkins-hbase4:39325] server.AbstractConnector(383): Stopped ServerConnector@5bac19f3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:07,366 INFO [RS:3;jenkins-hbase4:43571] server.AbstractConnector(383): Stopped ServerConnector@b9fe3db{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:07,366 INFO [RS:1;jenkins-hbase4:39109] server.AbstractConnector(383): Stopped ServerConnector@23f51881{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:07,366 INFO [RS:3;jenkins-hbase4:43571] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:07,366 INFO [RS:2;jenkins-hbase4:38543] server.AbstractConnector(383): Stopped ServerConnector@5f367c60{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:07,366 INFO [RS:1;jenkins-hbase4:39109] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:07,366 INFO [RS:0;jenkins-hbase4:39325] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:07,367 INFO [RS:3;jenkins-hbase4:43571] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@46fccd5b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:07,366 INFO [RS:2;jenkins-hbase4:38543] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:07,368 INFO [RS:0;jenkins-hbase4:39325] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2aa51e1f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:07,369 INFO [RS:2;jenkins-hbase4:38543] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@48aea874{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:07,368 INFO [RS:1;jenkins-hbase4:39109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5eb994c2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:07,370 INFO [RS:2;jenkins-hbase4:38543] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39b957cc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:07,371 INFO [RS:1;jenkins-hbase4:39109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@68e3f8ea{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:07,368 INFO [RS:3;jenkins-hbase4:43571] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@65cc72bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:07,370 INFO [RS:0;jenkins-hbase4:39325] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@327aeb51{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:07,375 INFO [RS:1;jenkins-hbase4:39109] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 22:16:07,375 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 22:16:07,375 INFO [RS:1;jenkins-hbase4:39109] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 22:16:07,375 INFO [RS:1;jenkins-hbase4:39109] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 22:16:07,375 INFO [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:16:07,376 INFO [RS:3;jenkins-hbase4:43571] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 22:16:07,376 INFO [RS:2;jenkins-hbase4:38543] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 22:16:07,376 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 22:16:07,376 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 22:16:07,376 INFO [RS:3;jenkins-hbase4:43571] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 22:16:07,376 INFO [RS:3;jenkins-hbase4:43571] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 22:16:07,376 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(3305): Received CLOSE for b4a18ea8d84755db0befaf862f1698a9 2023-07-13 22:16:07,375 INFO [RS:0;jenkins-hbase4:39325] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 22:16:07,376 INFO [RS:2;jenkins-hbase4:38543] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 22:16:07,376 INFO [RS:0;jenkins-hbase4:39325] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 22:16:07,376 DEBUG [RS:1;jenkins-hbase4:39109] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5e632b08 to 127.0.0.1:54493 2023-07-13 22:16:07,376 INFO [RS:0;jenkins-hbase4:39325] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 22:16:07,376 INFO [RS:2;jenkins-hbase4:38543] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-13 22:16:07,377 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(3305): Received CLOSE for c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:16:07,376 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 22:16:07,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b4a18ea8d84755db0befaf862f1698a9, disabling compactions & flushes 2023-07-13 22:16:07,377 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(3305): Received CLOSE for 6412afb39fb47dcabd5281695612c837 2023-07-13 22:16:07,377 INFO [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:16:07,377 INFO [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(3305): Received CLOSE for 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:07,376 DEBUG [RS:1;jenkins-hbase4:39109] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:07,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1f5d6fab0cf581ad20cfb9da5c269389, disabling compactions & flushes 2023-07-13 22:16:07,378 INFO [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39109,1689286541053; all regions closed. 2023-07-13 22:16:07,377 INFO [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:07,377 DEBUG [RS:2;jenkins-hbase4:38543] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x626c4d03 to 127.0.0.1:54493 2023-07-13 22:16:07,378 DEBUG [RS:0;jenkins-hbase4:39325] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1426688c to 127.0.0.1:54493 2023-07-13 22:16:07,378 DEBUG [RS:0;jenkins-hbase4:39325] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:07,378 INFO [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 22:16:07,378 DEBUG [RS:2;jenkins-hbase4:38543] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:07,378 INFO [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38543,1689286541242; all regions closed. 2023-07-13 22:16:07,377 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:07,377 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:16:07,379 DEBUG [RS:3;jenkins-hbase4:43571] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6f7c8fc8 to 127.0.0.1:54493 2023-07-13 22:16:07,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:16:07,378 DEBUG [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1478): Online Regions={1f5d6fab0cf581ad20cfb9da5c269389=testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389.} 2023-07-13 22:16:07,378 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 
2023-07-13 22:16:07,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. after waiting 0 ms 2023-07-13 22:16:07,379 DEBUG [RS:3;jenkins-hbase4:43571] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:07,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:16:07,380 INFO [RS:3;jenkins-hbase4:43571] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 22:16:07,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:07,380 INFO [RS:3;jenkins-hbase4:43571] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 22:16:07,380 INFO [RS:3;jenkins-hbase4:43571] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 22:16:07,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. after waiting 0 ms 2023-07-13 22:16:07,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:07,380 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 22:16:07,380 DEBUG [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1504): Waiting on 1f5d6fab0cf581ad20cfb9da5c269389 2023-07-13 22:16:07,380 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-13 22:16:07,380 DEBUG [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1478): Online Regions={b4a18ea8d84755db0befaf862f1698a9=hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9., c215608c4a51d4b80df51dd910f81bab=hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab., 1588230740=hbase:meta,,1.1588230740, 6412afb39fb47dcabd5281695612c837=unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837.} 2023-07-13 22:16:07,380 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 22:16:07,381 DEBUG [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1504): Waiting on 1588230740, 6412afb39fb47dcabd5281695612c837, b4a18ea8d84755db0befaf862f1698a9, c215608c4a51d4b80df51dd910f81bab 2023-07-13 22:16:07,381 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 22:16:07,381 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 22:16:07,381 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 22:16:07,381 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 22:16:07,381 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.46 KB 
heapSize=61.09 KB 2023-07-13 22:16:07,395 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:07,396 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:07,398 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:07,398 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:07,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/testRename/1f5d6fab0cf581ad20cfb9da5c269389/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-13 22:16:07,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/namespace/b4a18ea8d84755db0befaf862f1698a9/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-13 22:16:07,401 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:07,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1f5d6fab0cf581ad20cfb9da5c269389: 2023-07-13 22:16:07,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689286560459.1f5d6fab0cf581ad20cfb9da5c269389. 2023-07-13 22:16:07,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:16:07,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b4a18ea8d84755db0befaf862f1698a9: 2023-07-13 22:16:07,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689286543715.b4a18ea8d84755db0befaf862f1698a9. 2023-07-13 22:16:07,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c215608c4a51d4b80df51dd910f81bab, disabling compactions & flushes 2023-07-13 22:16:07,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:16:07,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:16:07,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. after waiting 0 ms 2023-07-13 22:16:07,403 DEBUG [RS:2;jenkins-hbase4:38543] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs 2023-07-13 22:16:07,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 
2023-07-13 22:16:07,403 INFO [RS:2;jenkins-hbase4:38543] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38543%2C1689286541242:(num 1689286543348) 2023-07-13 22:16:07,403 DEBUG [RS:2;jenkins-hbase4:38543] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:07,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c215608c4a51d4b80df51dd910f81bab 1/1 column families, dataSize=22.12 KB heapSize=36.49 KB 2023-07-13 22:16:07,403 INFO [RS:2;jenkins-hbase4:38543] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:07,404 INFO [RS:2;jenkins-hbase4:38543] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:07,404 INFO [RS:2;jenkins-hbase4:38543] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 22:16:07,404 INFO [RS:2;jenkins-hbase4:38543] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 22:16:07,404 INFO [RS:2;jenkins-hbase4:38543] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 22:16:07,404 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:07,405 DEBUG [RS:1;jenkins-hbase4:39109] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs 2023-07-13 22:16:07,406 INFO [RS:1;jenkins-hbase4:39109] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39109%2C1689286541053:(num 1689286543347) 2023-07-13 22:16:07,406 DEBUG [RS:1;jenkins-hbase4:39109] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:07,406 INFO [RS:1;jenkins-hbase4:39109] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:07,406 INFO [RS:2;jenkins-hbase4:38543] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38543 2023-07-13 22:16:07,415 INFO [RS:1;jenkins-hbase4:39109] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:07,416 INFO [RS:1;jenkins-hbase4:39109] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 22:16:07,416 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:07,416 INFO [RS:1;jenkins-hbase4:39109] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 22:16:07,416 INFO [RS:1;jenkins-hbase4:39109] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 22:16:07,417 INFO [RS:1;jenkins-hbase4:39109] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39109 2023-07-13 22:16:07,428 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.12 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/.tmp/m/0feb0894c76a4edcbc716be1161aad74 2023-07-13 22:16:07,429 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.54 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/info/a84ae9a9682d45968d2bfbf3d6bf76e3 2023-07-13 22:16:07,435 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a84ae9a9682d45968d2bfbf3d6bf76e3 2023-07-13 22:16:07,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0feb0894c76a4edcbc716be1161aad74 2023-07-13 22:16:07,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/.tmp/m/0feb0894c76a4edcbc716be1161aad74 as hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/m/0feb0894c76a4edcbc716be1161aad74 2023-07-13 22:16:07,438 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:16:07,438 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:07,438 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:16:07,439 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:16:07,439 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:07,439 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:16:07,439 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): 
regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:07,439 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:07,439 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:07,439 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:16:07,439 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:07,439 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:16:07,439 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38543,1689286541242 2023-07-13 22:16:07,439 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:07,439 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39109,1689286541053 2023-07-13 22:16:07,440 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:07,440 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:07,440 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:07,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0feb0894c76a4edcbc716be1161aad74 2023-07-13 22:16:07,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/m/0feb0894c76a4edcbc716be1161aad74, entries=22, sequenceid=101, filesize=5.9 K 
2023-07-13 22:16:07,448 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.12 KB/22653, heapSize ~36.48 KB/37352, currentSize=0 B/0 for c215608c4a51d4b80df51dd910f81bab in 45ms, sequenceid=101, compaction requested=false 2023-07-13 22:16:07,458 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/rep_barrier/7a6be7eab3ce40afa3a5d94212675c2d 2023-07-13 22:16:07,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/rsgroup/c215608c4a51d4b80df51dd910f81bab/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=29 2023-07-13 22:16:07,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 22:16:07,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:16:07,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c215608c4a51d4b80df51dd910f81bab: 2023-07-13 22:16:07,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689286544007.c215608c4a51d4b80df51dd910f81bab. 2023-07-13 22:16:07,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6412afb39fb47dcabd5281695612c837, disabling compactions & flushes 2023-07-13 22:16:07,462 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:07,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:07,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. after waiting 0 ms 2023-07-13 22:16:07,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:07,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/default/unmovedTable/6412afb39fb47dcabd5281695612c837/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-13 22:16:07,466 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7a6be7eab3ce40afa3a5d94212675c2d 2023-07-13 22:16:07,467 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 
2023-07-13 22:16:07,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6412afb39fb47dcabd5281695612c837: 2023-07-13 22:16:07,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689286562135.6412afb39fb47dcabd5281695612c837. 2023-07-13 22:16:07,476 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/table/f9f318d1396944aea09d6270747e83f1 2023-07-13 22:16:07,482 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f9f318d1396944aea09d6270747e83f1 2023-07-13 22:16:07,482 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/info/a84ae9a9682d45968d2bfbf3d6bf76e3 as hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/info/a84ae9a9682d45968d2bfbf3d6bf76e3 2023-07-13 22:16:07,487 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a84ae9a9682d45968d2bfbf3d6bf76e3 2023-07-13 22:16:07,488 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/info/a84ae9a9682d45968d2bfbf3d6bf76e3, entries=62, sequenceid=210, filesize=11.8 K 2023-07-13 22:16:07,488 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/rep_barrier/7a6be7eab3ce40afa3a5d94212675c2d as hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/rep_barrier/7a6be7eab3ce40afa3a5d94212675c2d 2023-07-13 22:16:07,494 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7a6be7eab3ce40afa3a5d94212675c2d 2023-07-13 22:16:07,494 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/rep_barrier/7a6be7eab3ce40afa3a5d94212675c2d, entries=8, sequenceid=210, filesize=5.8 K 2023-07-13 22:16:07,495 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/.tmp/table/f9f318d1396944aea09d6270747e83f1 as hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/table/f9f318d1396944aea09d6270747e83f1 2023-07-13 22:16:07,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f9f318d1396944aea09d6270747e83f1 2023-07-13 22:16:07,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/table/f9f318d1396944aea09d6270747e83f1, entries=16, sequenceid=210, filesize=6.0 K 2023-07-13 22:16:07,502 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.46 KB/38356, heapSize ~61.05 KB/62512, currentSize=0 B/0 for 1588230740 in 121ms, sequenceid=210, compaction requested=false 2023-07-13 22:16:07,518 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=98 2023-07-13 22:16:07,519 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 22:16:07,520 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 22:16:07,520 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 22:16:07,520 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 22:16:07,580 INFO [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39325,1689286540864; all regions closed. 2023-07-13 22:16:07,581 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43571,1689286544760; all regions closed. 2023-07-13 22:16:07,591 DEBUG [RS:3;jenkins-hbase4:43571] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs 2023-07-13 22:16:07,591 INFO [RS:3;jenkins-hbase4:43571] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43571%2C1689286544760.meta:.meta(num 1689286551837) 2023-07-13 22:16:07,591 DEBUG [RS:0;jenkins-hbase4:39325] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs 2023-07-13 22:16:07,591 INFO [RS:0;jenkins-hbase4:39325] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39325%2C1689286540864.meta:.meta(num 1689286543505) 2023-07-13 22:16:07,599 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/WALs/jenkins-hbase4.apache.org,39325,1689286540864/jenkins-hbase4.apache.org%2C39325%2C1689286540864.1689286543304 not finished, retry = 0 2023-07-13 22:16:07,601 DEBUG [RS:3;jenkins-hbase4:43571] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs 2023-07-13 22:16:07,601 INFO [RS:3;jenkins-hbase4:43571] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43571%2C1689286544760:(num 1689286545243) 2023-07-13 22:16:07,601 DEBUG [RS:3;jenkins-hbase4:43571] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:07,601 INFO [RS:3;jenkins-hbase4:43571] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:07,601 INFO [RS:3;jenkins-hbase4:43571] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, 
period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:07,601 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:07,602 INFO [RS:3;jenkins-hbase4:43571] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43571 2023-07-13 22:16:07,702 DEBUG [RS:0;jenkins-hbase4:39325] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/oldWALs 2023-07-13 22:16:07,702 INFO [RS:0;jenkins-hbase4:39325] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39325%2C1689286540864:(num 1689286543304) 2023-07-13 22:16:07,702 DEBUG [RS:0;jenkins-hbase4:39325] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:07,702 INFO [RS:0;jenkins-hbase4:39325] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:07,702 INFO [RS:0;jenkins-hbase4:39325] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:07,702 INFO [RS:0;jenkins-hbase4:39325] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 22:16:07,702 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:07,702 INFO [RS:0;jenkins-hbase4:39325] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 22:16:07,702 INFO [RS:0;jenkins-hbase4:39325] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 22:16:07,704 INFO [RS:0;jenkins-hbase4:39325] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39325 2023-07-13 22:16:07,741 INFO [RS:2;jenkins-hbase4:38543] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38543,1689286541242; zookeeper connection closed. 
2023-07-13 22:16:07,741 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:07,741 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:38543-0x10160c1767c0003, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:07,741 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@26f463bd] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@26f463bd 2023-07-13 22:16:07,742 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:07,742 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43571,1689286544760 2023-07-13 22:16:07,742 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:07,742 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39325,1689286540864] 2023-07-13 22:16:07,742 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39325,1689286540864 2023-07-13 22:16:07,743 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39325,1689286540864; numProcessing=1 2023-07-13 22:16:07,746 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39325,1689286540864 already deleted, retry=false 2023-07-13 22:16:07,746 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39325,1689286540864 expired; onlineServers=3 2023-07-13 22:16:07,746 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39109,1689286541053] 2023-07-13 22:16:07,746 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39109,1689286541053; numProcessing=2 2023-07-13 22:16:07,747 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39109,1689286541053 already deleted, retry=false 2023-07-13 22:16:07,747 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39109,1689286541053 expired; onlineServers=2 2023-07-13 22:16:07,747 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43571,1689286544760] 2023-07-13 22:16:07,747 DEBUG 
[RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43571,1689286544760; numProcessing=3 2023-07-13 22:16:07,748 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43571,1689286544760 already deleted, retry=false 2023-07-13 22:16:07,748 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43571,1689286544760 expired; onlineServers=1 2023-07-13 22:16:07,748 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38543,1689286541242] 2023-07-13 22:16:07,748 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38543,1689286541242; numProcessing=4 2023-07-13 22:16:07,751 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38543,1689286541242 already deleted, retry=false 2023-07-13 22:16:07,751 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38543,1689286541242 expired; onlineServers=0 2023-07-13 22:16:07,751 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34777,1689286538976' ***** 2023-07-13 22:16:07,751 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 22:16:07,752 DEBUG [M:0;jenkins-hbase4:34777] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@10baf2aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:16:07,752 INFO [M:0;jenkins-hbase4:34777] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:07,754 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 22:16:07,755 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:07,755 INFO [M:0;jenkins-hbase4:34777] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4ab75c3a{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 22:16:07,755 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:16:07,755 INFO [M:0;jenkins-hbase4:34777] server.AbstractConnector(383): Stopped ServerConnector@695b889{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:07,755 INFO [M:0;jenkins-hbase4:34777] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:07,756 INFO [M:0;jenkins-hbase4:34777] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@30ab4443{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:07,756 INFO [M:0;jenkins-hbase4:34777] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41cde975{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:07,757 INFO [M:0;jenkins-hbase4:34777] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34777,1689286538976 2023-07-13 22:16:07,757 INFO [M:0;jenkins-hbase4:34777] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34777,1689286538976; all regions closed. 2023-07-13 22:16:07,757 DEBUG [M:0;jenkins-hbase4:34777] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:07,757 INFO [M:0;jenkins-hbase4:34777] master.HMaster(1491): Stopping master jetty server 2023-07-13 22:16:07,758 INFO [M:0;jenkins-hbase4:34777] server.AbstractConnector(383): Stopped ServerConnector@7375106d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:07,758 DEBUG [M:0;jenkins-hbase4:34777] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 22:16:07,758 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-13 22:16:07,758 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689286542888] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689286542888,5,FailOnTimeoutGroup] 2023-07-13 22:16:07,758 DEBUG [M:0;jenkins-hbase4:34777] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 22:16:07,759 INFO [M:0;jenkins-hbase4:34777] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 22:16:07,758 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689286542889] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689286542889,5,FailOnTimeoutGroup] 2023-07-13 22:16:07,759 INFO [M:0;jenkins-hbase4:34777] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-13 22:16:07,759 INFO [M:0;jenkins-hbase4:34777] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-13 22:16:07,759 DEBUG [M:0;jenkins-hbase4:34777] master.HMaster(1512): Stopping service threads 2023-07-13 22:16:07,759 INFO [M:0;jenkins-hbase4:34777] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 22:16:07,759 ERROR [M:0;jenkins-hbase4:34777] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-13 22:16:07,760 INFO [M:0;jenkins-hbase4:34777] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 22:16:07,760 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-13 22:16:07,761 DEBUG [M:0;jenkins-hbase4:34777] zookeeper.ZKUtil(398): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-13 22:16:07,761 WARN [M:0;jenkins-hbase4:34777] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-13 22:16:07,761 INFO [M:0;jenkins-hbase4:34777] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 22:16:07,761 INFO [M:0;jenkins-hbase4:34777] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 22:16:07,761 DEBUG [M:0;jenkins-hbase4:34777] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 22:16:07,761 INFO [M:0;jenkins-hbase4:34777] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:07,761 DEBUG [M:0;jenkins-hbase4:34777] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:07,761 DEBUG [M:0;jenkins-hbase4:34777] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 22:16:07,761 DEBUG [M:0;jenkins-hbase4:34777] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-13 22:16:07,761 INFO [M:0;jenkins-hbase4:34777] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.22 KB heapSize=621.39 KB 2023-07-13 22:16:07,782 INFO [M:0;jenkins-hbase4:34777] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.22 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9e07958c54d34ea89627bf5aea2d4afa 2023-07-13 22:16:07,789 DEBUG [M:0;jenkins-hbase4:34777] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9e07958c54d34ea89627bf5aea2d4afa as hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9e07958c54d34ea89627bf5aea2d4afa 2023-07-13 22:16:07,794 INFO [M:0;jenkins-hbase4:34777] regionserver.HStore(1080): Added hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9e07958c54d34ea89627bf5aea2d4afa, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-13 22:16:07,795 INFO [M:0;jenkins-hbase4:34777] regionserver.HRegion(2948): Finished flush of dataSize ~519.22 KB/531680, heapSize ~621.38 KB/636288, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 34ms, sequenceid=1152, compaction requested=false 2023-07-13 22:16:07,801 INFO [M:0;jenkins-hbase4:34777] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:07,801 DEBUG [M:0;jenkins-hbase4:34777] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 22:16:07,805 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:07,805 INFO [M:0;jenkins-hbase4:34777] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-13 22:16:07,806 INFO [M:0;jenkins-hbase4:34777] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34777 2023-07-13 22:16:07,808 DEBUG [M:0;jenkins-hbase4:34777] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34777,1689286538976 already deleted, retry=false 2023-07-13 22:16:07,841 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:07,841 INFO [RS:1;jenkins-hbase4:39109] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39109,1689286541053; zookeeper connection closed. 
2023-07-13 22:16:07,841 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39109-0x10160c1767c0002, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:07,841 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7c20fb81] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7c20fb81 2023-07-13 22:16:07,941 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:07,941 INFO [M:0;jenkins-hbase4:34777] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34777,1689286538976; zookeeper connection closed. 2023-07-13 22:16:07,941 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): master:34777-0x10160c1767c0000, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:08,041 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:08,041 INFO [RS:0;jenkins-hbase4:39325] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39325,1689286540864; zookeeper connection closed. 2023-07-13 22:16:08,041 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:39325-0x10160c1767c0001, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:08,042 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@70e2c3b6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@70e2c3b6 2023-07-13 22:16:08,142 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:08,142 INFO [RS:3;jenkins-hbase4:43571] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43571,1689286544760; zookeeper connection closed. 
2023-07-13 22:16:08,142 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): regionserver:43571-0x10160c1767c000b, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:08,142 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@c775c52] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@c775c52 2023-07-13 22:16:08,142 INFO [Listener at localhost/39613] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-13 22:16:08,143 WARN [Listener at localhost/39613] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 22:16:08,147 INFO [Listener at localhost/39613] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 22:16:08,251 WARN [BP-1576339184-172.31.14.131-1689286535402 heartbeating to localhost/127.0.0.1:42191] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 22:16:08,251 WARN [BP-1576339184-172.31.14.131-1689286535402 heartbeating to localhost/127.0.0.1:42191] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1576339184-172.31.14.131-1689286535402 (Datanode Uuid 082a6246-a47b-43dc-8198-7d1fbd11fc69) service to localhost/127.0.0.1:42191 2023-07-13 22:16:08,252 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/dfs/data/data5/current/BP-1576339184-172.31.14.131-1689286535402] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:08,253 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/dfs/data/data6/current/BP-1576339184-172.31.14.131-1689286535402] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:08,254 WARN [Listener at localhost/39613] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 22:16:08,266 INFO [Listener at localhost/39613] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 22:16:08,370 WARN [BP-1576339184-172.31.14.131-1689286535402 heartbeating to localhost/127.0.0.1:42191] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 22:16:08,370 WARN [BP-1576339184-172.31.14.131-1689286535402 heartbeating to localhost/127.0.0.1:42191] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1576339184-172.31.14.131-1689286535402 (Datanode Uuid 74606711-abfc-42f5-81ca-22688d879c43) service to localhost/127.0.0.1:42191 2023-07-13 22:16:08,370 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/dfs/data/data3/current/BP-1576339184-172.31.14.131-1689286535402] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:08,371 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/dfs/data/data4/current/BP-1576339184-172.31.14.131-1689286535402] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:08,372 WARN [Listener at localhost/39613] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 22:16:08,384 INFO [Listener at localhost/39613] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 22:16:08,386 WARN [BP-1576339184-172.31.14.131-1689286535402 heartbeating to localhost/127.0.0.1:42191] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 22:16:08,386 WARN [BP-1576339184-172.31.14.131-1689286535402 heartbeating to localhost/127.0.0.1:42191] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1576339184-172.31.14.131-1689286535402 (Datanode Uuid 03f9678b-7844-44a6-b9df-b4e10718e2b5) service to localhost/127.0.0.1:42191 2023-07-13 22:16:08,387 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/dfs/data/data1/current/BP-1576339184-172.31.14.131-1689286535402] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:08,389 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/cluster_671d45af-a364-9deb-5b27-c789bd092bc3/dfs/data/data2/current/BP-1576339184-172.31.14.131-1689286535402] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:08,417 INFO [Listener at localhost/39613] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 22:16:08,538 INFO [Listener at localhost/39613] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-13 22:16:08,596 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-13 22:16:08,596 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-13 22:16:08,596 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.log.dir so I do NOT create it in target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a 2023-07-13 22:16:08,597 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/95f28ad8-b84b-c8a6-da72-79f5d9196fad/hadoop.tmp.dir so I do NOT create it in target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a 2023-07-13 22:16:08,597 INFO [Listener at localhost/39613] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/cluster_feef0bc5-4798-159d-f96b-1165c3947da7, deleteOnExit=true 2023-07-13 22:16:08,597 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-13 22:16:08,597 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/test.cache.data in system properties and HBase conf 2023-07-13 22:16:08,597 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.tmp.dir in system properties and HBase conf 2023-07-13 22:16:08,597 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.log.dir in system properties and HBase conf 2023-07-13 22:16:08,597 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-13 22:16:08,597 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-13 22:16:08,597 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-13 22:16:08,598 DEBUG [Listener at localhost/39613] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-13 22:16:08,598 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-13 22:16:08,598 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-13 22:16:08,598 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-13 22:16:08,598 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 22:16:08,598 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-13 22:16:08,599 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-13 22:16:08,599 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 22:16:08,599 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 22:16:08,599 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-13 22:16:08,599 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/nfs.dump.dir in system properties and HBase conf 2023-07-13 22:16:08,599 INFO [Listener at localhost/39613] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/java.io.tmpdir in system properties and HBase conf 2023-07-13 22:16:08,599 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 22:16:08,599 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-13 22:16:08,599 INFO [Listener at localhost/39613] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-13 22:16:08,605 WARN [Listener at localhost/39613] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 22:16:08,605 WARN [Listener at localhost/39613] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 22:16:08,635 DEBUG [Listener at localhost/39613-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10160c1767c000a, quorum=127.0.0.1:54493, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-13 22:16:08,635 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10160c1767c000a, quorum=127.0.0.1:54493, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-13 22:16:08,652 WARN [Listener at localhost/39613] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:16:08,655 INFO [Listener at localhost/39613] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:16:08,659 INFO [Listener at localhost/39613] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/java.io.tmpdir/Jetty_localhost_43931_hdfs____rz4obm/webapp 2023-07-13 22:16:08,754 INFO [Listener at localhost/39613] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43931 2023-07-13 22:16:08,758 WARN [Listener at localhost/39613] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 22:16:08,759 WARN [Listener at localhost/39613] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 22:16:08,803 WARN [Listener at localhost/44513] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 22:16:08,818 WARN [Listener at localhost/44513] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 22:16:08,820 WARN [Listener 
at localhost/44513] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:16:08,821 INFO [Listener at localhost/44513] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:16:08,825 INFO [Listener at localhost/44513] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/java.io.tmpdir/Jetty_localhost_42521_datanode____sb3sv1/webapp 2023-07-13 22:16:08,918 INFO [Listener at localhost/44513] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42521 2023-07-13 22:16:08,925 WARN [Listener at localhost/33287] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 22:16:08,950 WARN [Listener at localhost/33287] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 22:16:08,953 WARN [Listener at localhost/33287] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:16:08,954 INFO [Listener at localhost/33287] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:16:08,960 INFO [Listener at localhost/33287] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/java.io.tmpdir/Jetty_localhost_43469_datanode____.6are2p/webapp 2023-07-13 22:16:09,078 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf44611f7a7cbbed5: Processing first storage report for DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f from datanode 0647ff0b-7458-41f2-b0eb-39553a485c1a 2023-07-13 22:16:09,078 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf44611f7a7cbbed5: from storage DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f node DatanodeRegistration(127.0.0.1:35051, datanodeUuid=0647ff0b-7458-41f2-b0eb-39553a485c1a, infoPort=36753, infoSecurePort=0, ipcPort=33287, storageInfo=lv=-57;cid=testClusterID;nsid=99401900;c=1689286568608), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 22:16:09,079 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf44611f7a7cbbed5: Processing first storage report for DS-3cd8e5aa-8c07-42da-81fd-0aa66d60c5f3 from datanode 0647ff0b-7458-41f2-b0eb-39553a485c1a 2023-07-13 22:16:09,079 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf44611f7a7cbbed5: from storage DS-3cd8e5aa-8c07-42da-81fd-0aa66d60c5f3 node DatanodeRegistration(127.0.0.1:35051, datanodeUuid=0647ff0b-7458-41f2-b0eb-39553a485c1a, infoPort=36753, infoSecurePort=0, ipcPort=33287, storageInfo=lv=-57;cid=testClusterID;nsid=99401900;c=1689286568608), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:16:09,086 INFO [Listener at localhost/33287] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43469 2023-07-13 22:16:09,098 WARN [Listener at localhost/36769] common.MetricsLoggerTask(153): Metrics logging will not be async since 
the logger is not log4j 2023-07-13 22:16:09,134 WARN [Listener at localhost/36769] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 22:16:09,138 WARN [Listener at localhost/36769] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:16:09,140 INFO [Listener at localhost/36769] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:16:09,158 INFO [Listener at localhost/36769] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/java.io.tmpdir/Jetty_localhost_45707_datanode____.ukvsjg/webapp 2023-07-13 22:16:09,193 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 22:16:09,193 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 22:16:09,194 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 22:16:09,273 INFO [Listener at localhost/36769] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45707 2023-07-13 22:16:09,281 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf65bc132aa6cb084: Processing first storage report for DS-e3945a73-aee6-441a-bf82-67e5af08a714 from datanode 402c5051-9113-47d5-9d13-c2ecccbcdf63 2023-07-13 22:16:09,281 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf65bc132aa6cb084: from storage DS-e3945a73-aee6-441a-bf82-67e5af08a714 node DatanodeRegistration(127.0.0.1:40535, datanodeUuid=402c5051-9113-47d5-9d13-c2ecccbcdf63, infoPort=36879, infoSecurePort=0, ipcPort=36769, storageInfo=lv=-57;cid=testClusterID;nsid=99401900;c=1689286568608), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:16:09,281 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf65bc132aa6cb084: Processing first storage report for DS-fa8a87c6-edf2-458b-a262-356b98b8cbea from datanode 402c5051-9113-47d5-9d13-c2ecccbcdf63 2023-07-13 22:16:09,282 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf65bc132aa6cb084: from storage DS-fa8a87c6-edf2-458b-a262-356b98b8cbea node DatanodeRegistration(127.0.0.1:40535, datanodeUuid=402c5051-9113-47d5-9d13-c2ecccbcdf63, infoPort=36879, infoSecurePort=0, ipcPort=36769, storageInfo=lv=-57;cid=testClusterID;nsid=99401900;c=1689286568608), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 22:16:09,286 WARN [Listener at localhost/33829] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 22:16:09,538 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xadead44d33145164: Processing first storage report for DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f from datanode 
0f27d696-7191-4d6e-866b-9cec117d49d5 2023-07-13 22:16:09,538 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xadead44d33145164: from storage DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f node DatanodeRegistration(127.0.0.1:33227, datanodeUuid=0f27d696-7191-4d6e-866b-9cec117d49d5, infoPort=34295, infoSecurePort=0, ipcPort=33829, storageInfo=lv=-57;cid=testClusterID;nsid=99401900;c=1689286568608), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:16:09,542 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xadead44d33145164: Processing first storage report for DS-ae29fd95-c8c2-4f8a-afd4-8c8ebca92d53 from datanode 0f27d696-7191-4d6e-866b-9cec117d49d5 2023-07-13 22:16:09,542 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xadead44d33145164: from storage DS-ae29fd95-c8c2-4f8a-afd4-8c8ebca92d53 node DatanodeRegistration(127.0.0.1:33227, datanodeUuid=0f27d696-7191-4d6e-866b-9cec117d49d5, infoPort=34295, infoSecurePort=0, ipcPort=33829, storageInfo=lv=-57;cid=testClusterID;nsid=99401900;c=1689286568608), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:16:09,606 DEBUG [Listener at localhost/33829] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a 2023-07-13 22:16:09,608 INFO [Listener at localhost/33829] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/cluster_feef0bc5-4798-159d-f96b-1165c3947da7/zookeeper_0, clientPort=50537, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/cluster_feef0bc5-4798-159d-f96b-1165c3947da7/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/cluster_feef0bc5-4798-159d-f96b-1165c3947da7/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-13 22:16:09,610 INFO [Listener at localhost/33829] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50537 2023-07-13 22:16:09,610 INFO [Listener at localhost/33829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:09,612 INFO [Listener at localhost/33829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:09,631 INFO [Listener at localhost/33829] util.FSUtils(471): Created version file at hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced with version=8 2023-07-13 22:16:09,631 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/hbase-staging 2023-07-13 22:16:09,633 DEBUG [Listener at localhost/33829] hbase.LocalHBaseCluster(134): Setting 
Master Port to random. 2023-07-13 22:16:09,633 DEBUG [Listener at localhost/33829] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-13 22:16:09,633 DEBUG [Listener at localhost/33829] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-13 22:16:09,633 DEBUG [Listener at localhost/33829] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-13 22:16:09,634 INFO [Listener at localhost/33829] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:16:09,635 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:09,635 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:09,635 INFO [Listener at localhost/33829] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:16:09,635 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:09,635 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:16:09,635 INFO [Listener at localhost/33829] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:16:09,636 INFO [Listener at localhost/33829] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43291 2023-07-13 22:16:09,637 INFO [Listener at localhost/33829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:09,638 INFO [Listener at localhost/33829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:09,640 INFO [Listener at localhost/33829] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43291 connecting to ZooKeeper ensemble=127.0.0.1:50537 2023-07-13 22:16:09,649 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:432910x0, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:16:09,649 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43291-0x10160c1f18b0000 connected 2023-07-13 22:16:09,662 DEBUG [Listener at localhost/33829] zookeeper.ZKUtil(164): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:16:09,663 DEBUG [Listener at localhost/33829] zookeeper.ZKUtil(164): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:09,663 DEBUG 
[Listener at localhost/33829] zookeeper.ZKUtil(164): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:16:09,664 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43291 2023-07-13 22:16:09,664 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43291 2023-07-13 22:16:09,665 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43291 2023-07-13 22:16:09,665 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43291 2023-07-13 22:16:09,665 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43291 2023-07-13 22:16:09,667 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:16:09,667 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:16:09,667 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:16:09,668 INFO [Listener at localhost/33829] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 22:16:09,668 INFO [Listener at localhost/33829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:16:09,668 INFO [Listener at localhost/33829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:16:09,668 INFO [Listener at localhost/33829] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 22:16:09,669 INFO [Listener at localhost/33829] http.HttpServer(1146): Jetty bound to port 34297 2023-07-13 22:16:09,669 INFO [Listener at localhost/33829] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:16:09,670 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:09,671 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@415470d2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:16:09,671 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:09,672 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@13d1ac8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:16:09,796 INFO [Listener at localhost/33829] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:16:09,797 INFO [Listener at localhost/33829] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:16:09,797 INFO [Listener at localhost/33829] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:16:09,797 INFO [Listener at localhost/33829] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 22:16:09,799 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:09,800 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5d38f915{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/java.io.tmpdir/jetty-0_0_0_0-34297-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8180014867317521101/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 22:16:09,801 INFO [Listener at localhost/33829] server.AbstractConnector(333): Started ServerConnector@2d8a119b{HTTP/1.1, (http/1.1)}{0.0.0.0:34297} 2023-07-13 22:16:09,801 INFO [Listener at localhost/33829] server.Server(415): Started @36450ms 2023-07-13 22:16:09,801 INFO [Listener at localhost/33829] master.HMaster(444): hbase.rootdir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced, hbase.cluster.distributed=false 2023-07-13 22:16:09,819 INFO [Listener at localhost/33829] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:16:09,819 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:09,819 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:09,819 INFO 
[Listener at localhost/33829] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:16:09,819 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:09,819 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:16:09,819 INFO [Listener at localhost/33829] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:16:09,820 INFO [Listener at localhost/33829] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44461 2023-07-13 22:16:09,821 INFO [Listener at localhost/33829] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 22:16:09,824 DEBUG [Listener at localhost/33829] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 22:16:09,824 INFO [Listener at localhost/33829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:09,825 INFO [Listener at localhost/33829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:09,827 INFO [Listener at localhost/33829] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44461 connecting to ZooKeeper ensemble=127.0.0.1:50537 2023-07-13 22:16:09,830 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-13 22:16:09,831 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:444610x0, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:16:09,832 DEBUG [Listener at localhost/33829] zookeeper.ZKUtil(164): regionserver:444610x0, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:16:09,833 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44461-0x10160c1f18b0001 connected 2023-07-13 22:16:09,834 DEBUG [Listener at localhost/33829] zookeeper.ZKUtil(164): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:09,835 DEBUG [Listener at localhost/33829] zookeeper.ZKUtil(164): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:16:09,841 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44461 2023-07-13 22:16:09,841 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44461 2023-07-13 22:16:09,842 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, 
numCallQueues=1, port=44461 2023-07-13 22:16:09,847 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44461 2023-07-13 22:16:09,849 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44461 2023-07-13 22:16:09,853 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:16:09,853 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:16:09,853 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:16:09,854 INFO [Listener at localhost/33829] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 22:16:09,854 INFO [Listener at localhost/33829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:16:09,854 INFO [Listener at localhost/33829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:16:09,854 INFO [Listener at localhost/33829] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 22:16:09,858 INFO [Listener at localhost/33829] http.HttpServer(1146): Jetty bound to port 35069 2023-07-13 22:16:09,858 INFO [Listener at localhost/33829] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:16:09,870 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:09,870 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@79875f3c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:16:09,871 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:09,871 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@35a42664{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:16:10,008 INFO [Listener at localhost/33829] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:16:10,009 INFO [Listener at localhost/33829] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:16:10,009 INFO [Listener at localhost/33829] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:16:10,009 INFO [Listener at localhost/33829] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 
22:16:10,010 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:10,011 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@75f0de34{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/java.io.tmpdir/jetty-0_0_0_0-35069-hbase-server-2_4_18-SNAPSHOT_jar-_-any-220749767973042748/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:10,012 INFO [Listener at localhost/33829] server.AbstractConnector(333): Started ServerConnector@69b40dc3{HTTP/1.1, (http/1.1)}{0.0.0.0:35069} 2023-07-13 22:16:10,012 INFO [Listener at localhost/33829] server.Server(415): Started @36661ms 2023-07-13 22:16:10,026 INFO [Listener at localhost/33829] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:16:10,026 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:10,026 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:10,027 INFO [Listener at localhost/33829] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:16:10,027 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:10,027 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:16:10,027 INFO [Listener at localhost/33829] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:16:10,029 INFO [Listener at localhost/33829] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42899 2023-07-13 22:16:10,029 INFO [Listener at localhost/33829] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 22:16:10,035 DEBUG [Listener at localhost/33829] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 22:16:10,036 INFO [Listener at localhost/33829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:10,038 INFO [Listener at localhost/33829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:10,040 INFO [Listener at localhost/33829] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42899 connecting to ZooKeeper ensemble=127.0.0.1:50537 2023-07-13 22:16:10,049 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): 
regionserver:428990x0, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:16:10,061 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42899-0x10160c1f18b0002 connected 2023-07-13 22:16:10,061 DEBUG [Listener at localhost/33829] zookeeper.ZKUtil(164): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:16:10,062 DEBUG [Listener at localhost/33829] zookeeper.ZKUtil(164): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:10,063 DEBUG [Listener at localhost/33829] zookeeper.ZKUtil(164): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:16:10,069 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42899 2023-07-13 22:16:10,070 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42899 2023-07-13 22:16:10,072 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42899 2023-07-13 22:16:10,073 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42899 2023-07-13 22:16:10,073 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42899 2023-07-13 22:16:10,076 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:16:10,076 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:16:10,076 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:16:10,077 INFO [Listener at localhost/33829] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 22:16:10,077 INFO [Listener at localhost/33829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:16:10,077 INFO [Listener at localhost/33829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:16:10,078 INFO [Listener at localhost/33829] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 22:16:10,078 INFO [Listener at localhost/33829] http.HttpServer(1146): Jetty bound to port 42783 2023-07-13 22:16:10,078 INFO [Listener at localhost/33829] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:16:10,083 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:10,083 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39385a05{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:16:10,084 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:10,084 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f993756{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:16:10,235 INFO [Listener at localhost/33829] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:16:10,237 INFO [Listener at localhost/33829] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:16:10,237 INFO [Listener at localhost/33829] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:16:10,237 INFO [Listener at localhost/33829] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 22:16:10,241 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:10,242 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6ccdb5d4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/java.io.tmpdir/jetty-0_0_0_0-42783-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7016081018083263711/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:10,243 INFO [Listener at localhost/33829] server.AbstractConnector(333): Started ServerConnector@7fb867a7{HTTP/1.1, (http/1.1)}{0.0.0.0:42783} 2023-07-13 22:16:10,243 INFO [Listener at localhost/33829] server.Server(415): Started @36892ms 2023-07-13 22:16:10,272 INFO [Listener at localhost/33829] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:16:10,273 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:10,274 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:10,274 INFO [Listener at localhost/33829] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:16:10,274 INFO 
[Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:10,274 INFO [Listener at localhost/33829] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:16:10,275 INFO [Listener at localhost/33829] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:16:10,276 INFO [Listener at localhost/33829] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46209 2023-07-13 22:16:10,276 INFO [Listener at localhost/33829] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 22:16:10,282 DEBUG [Listener at localhost/33829] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 22:16:10,283 INFO [Listener at localhost/33829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:10,284 INFO [Listener at localhost/33829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:10,288 INFO [Listener at localhost/33829] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46209 connecting to ZooKeeper ensemble=127.0.0.1:50537 2023-07-13 22:16:10,302 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:462090x0, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:16:10,304 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46209-0x10160c1f18b0003 connected 2023-07-13 22:16:10,304 DEBUG [Listener at localhost/33829] zookeeper.ZKUtil(164): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:16:10,304 DEBUG [Listener at localhost/33829] zookeeper.ZKUtil(164): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:10,306 DEBUG [Listener at localhost/33829] zookeeper.ZKUtil(164): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:16:10,310 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46209 2023-07-13 22:16:10,313 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46209 2023-07-13 22:16:10,313 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46209 2023-07-13 22:16:10,317 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46209 2023-07-13 22:16:10,318 DEBUG [Listener at localhost/33829] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46209 2023-07-13 22:16:10,321 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:16:10,321 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:16:10,321 INFO [Listener at localhost/33829] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:16:10,322 INFO [Listener at localhost/33829] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 22:16:10,322 INFO [Listener at localhost/33829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:16:10,322 INFO [Listener at localhost/33829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:16:10,323 INFO [Listener at localhost/33829] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 22:16:10,323 INFO [Listener at localhost/33829] http.HttpServer(1146): Jetty bound to port 34339 2023-07-13 22:16:10,324 INFO [Listener at localhost/33829] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:16:10,331 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:10,331 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7fd62719{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:16:10,331 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:10,332 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@c2353b1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:16:10,473 INFO [Listener at localhost/33829] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:16:10,474 INFO [Listener at localhost/33829] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:16:10,475 INFO [Listener at localhost/33829] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:16:10,475 INFO [Listener at localhost/33829] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 22:16:10,476 INFO [Listener at localhost/33829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:10,477 INFO [Listener at localhost/33829] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@aa32419{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/java.io.tmpdir/jetty-0_0_0_0-34339-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6005529942002929323/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:10,479 INFO [Listener at localhost/33829] server.AbstractConnector(333): Started ServerConnector@77983366{HTTP/1.1, (http/1.1)}{0.0.0.0:34339} 2023-07-13 22:16:10,479 INFO [Listener at localhost/33829] server.Server(415): Started @37128ms 2023-07-13 22:16:10,481 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:16:10,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@33be496{HTTP/1.1, (http/1.1)}{0.0.0.0:36229} 2023-07-13 22:16:10,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37134ms 2023-07-13 22:16:10,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,43291,1689286569634 2023-07-13 22:16:10,488 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 22:16:10,489 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,43291,1689286569634 2023-07-13 22:16:10,490 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:16:10,490 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:16:10,490 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:16:10,490 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:16:10,491 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:10,492 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 22:16:10,493 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,43291,1689286569634 from backup master directory 2023-07-13 22:16:10,493 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 22:16:10,494 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,43291,1689286569634 2023-07-13 22:16:10,494 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 22:16:10,494 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 22:16:10,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,43291,1689286569634 2023-07-13 22:16:10,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/hbase.id with ID: bb70508f-96b5-44bb-b4e4-aacd88c0c72e 2023-07-13 22:16:10,524 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:10,527 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:10,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x20bfc310 to 127.0.0.1:50537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:10,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4aba9d74, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:10,543 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:10,544 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 22:16:10,544 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:16:10,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/data/master/store-tmp 2023-07-13 22:16:10,557 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:10,557 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 22:16:10,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:10,557 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:10,557 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 22:16:10,557 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:10,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
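For reference on the WALProvider instantiation logged just above (AsyncFSWALProvider): in HBase 2.x the provider is chosen through the hbase.wal.provider setting. The sketch below is not taken from this test; the class name WalProviderSketch is invented for illustration, and the key/value pair is the documented one but should be verified against the exact release.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
      public static Configuration asyncFsWalConf() {
        Configuration conf = HBaseConfiguration.create();
        // "asyncfs" maps to AsyncFSWALProvider, the provider seen in the log above;
        // "filesystem" would select the classic FSHLog-based provider instead.
        conf.set("hbase.wal.provider", "asyncfs");
        return conf;
      }
    }
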
2023-07-13 22:16:10,557 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 22:16:10,559 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/WALs/jenkins-hbase4.apache.org,43291,1689286569634 2023-07-13 22:16:10,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43291%2C1689286569634, suffix=, logDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/WALs/jenkins-hbase4.apache.org,43291,1689286569634, archiveDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/oldWALs, maxLogs=10 2023-07-13 22:16:10,580 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40535,DS-e3945a73-aee6-441a-bf82-67e5af08a714,DISK] 2023-07-13 22:16:10,582 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35051,DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f,DISK] 2023-07-13 22:16:10,583 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33227,DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f,DISK] 2023-07-13 22:16:10,587 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/WALs/jenkins-hbase4.apache.org,43291,1689286569634/jenkins-hbase4.apache.org%2C43291%2C1689286569634.1689286570563 2023-07-13 22:16:10,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40535,DS-e3945a73-aee6-441a-bf82-67e5af08a714,DISK], DatanodeInfoWithStorage[127.0.0.1:35051,DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f,DISK], DatanodeInfoWithStorage[127.0.0.1:33227,DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f,DISK]] 2023-07-13 22:16:10,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:10,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:10,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:10,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:10,590 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:10,592 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 22:16:10,592 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 22:16:10,593 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:10,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:10,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:10,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:10,599 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:10,599 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10325324960, jitterRate=-0.03837917745113373}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:10,599 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 22:16:10,599 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 22:16:10,601 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 22:16:10,601 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 22:16:10,601 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-13 22:16:10,601 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-13 22:16:10,602 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-13 22:16:10,602 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 22:16:10,607 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-13 22:16:10,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-13 22:16:10,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-13 22:16:10,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 22:16:10,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 22:16:10,612 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:10,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 22:16:10,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 22:16:10,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 22:16:10,615 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:10,615 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:10,615 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-13 22:16:10,615 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:10,616 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:10,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,43291,1689286569634, sessionid=0x10160c1f18b0000, setting cluster-up flag (Was=false) 2023-07-13 22:16:10,623 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 22:16:10,625 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43291,1689286569634 2023-07-13 22:16:10,629 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:10,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 22:16:10,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43291,1689286569634 2023-07-13 22:16:10,635 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.hbase-snapshot/.tmp 2023-07-13 22:16:10,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 22:16:10,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 22:16:10,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 22:16:10,638 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 22:16:10,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-13 22:16:10,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
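For context on the coprocessor lines above: the RSGroupAdminEndpoint is normally enabled through master configuration rather than code. A minimal sketch under that assumption follows; RsGroupEnableSketch is an invented class name, and the group-aware balancer setting is the documented companion property for region server groups rather than something shown in this log, so both keys should be checked against the deployed version.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RsGroupEnableSketch {
      public static Configuration rsGroupConf() {
        Configuration conf = HBaseConfiguration.create();
        // Load the RSGroup admin endpoint as a master coprocessor (the endpoint
        // reported as loaded in the log above).
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // Pair it with the group-aware balancer so region placement honors
        // group membership.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        return conf;
      }
    }
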
2023-07-13 22:16:10,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-13 22:16:10,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 22:16:10,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 22:16:10,702 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 22:16:10,702 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 22:16:10,703 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:16:10,703 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:16:10,703 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:16:10,703 INFO [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(951): ClusterId : bb70508f-96b5-44bb-b4e4-aacd88c0c72e 2023-07-13 22:16:10,703 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:16:10,704 DEBUG [RS:1;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 22:16:10,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-13 22:16:10,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:16:10,704 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,707 INFO [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(951): ClusterId : bb70508f-96b5-44bb-b4e4-aacd88c0c72e 2023-07-13 22:16:10,707 INFO [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(951): ClusterId : bb70508f-96b5-44bb-b4e4-aacd88c0c72e 2023-07-13 22:16:10,709 DEBUG [RS:2;jenkins-hbase4:46209] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 22:16:10,709 DEBUG [RS:0;jenkins-hbase4:44461] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 22:16:10,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689286600711 2023-07-13 22:16:10,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 22:16:10,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 22:16:10,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 22:16:10,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 22:16:10,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 22:16:10,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 22:16:10,713 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-13 22:16:10,714 DEBUG [RS:1;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 22:16:10,714 DEBUG [RS:1;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 22:16:10,715 DEBUG [RS:2;jenkins-hbase4:46209] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 22:16:10,715 DEBUG [RS:2;jenkins-hbase4:46209] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 22:16:10,717 DEBUG [RS:0;jenkins-hbase4:44461] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 22:16:10,717 DEBUG [RS:0;jenkins-hbase4:44461] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 22:16:10,717 DEBUG [RS:1;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 22:16:10,718 DEBUG [RS:2;jenkins-hbase4:46209] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 22:16:10,719 DEBUG [RS:0;jenkins-hbase4:44461] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 22:16:10,723 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 22:16:10,723 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-13 22:16:10,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 22:16:10,723 DEBUG [RS:2;jenkins-hbase4:46209] zookeeper.ReadOnlyZKClient(139): Connect 0x5674c010 to 127.0.0.1:50537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:10,723 DEBUG [RS:1;jenkins-hbase4:42899] zookeeper.ReadOnlyZKClient(139): Connect 0x744fd490 to 127.0.0.1:50537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:10,723 DEBUG [RS:0;jenkins-hbase4:44461] zookeeper.ReadOnlyZKClient(139): Connect 0x244bccb4 to 127.0.0.1:50537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:10,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 22:16:10,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 22:16:10,724 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:10,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 22:16:10,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 22:16:10,731 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689286570731,5,FailOnTimeoutGroup] 2023-07-13 22:16:10,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689286570735,5,FailOnTimeoutGroup] 2023-07-13 22:16:10,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-13 22:16:10,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
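The "Reopening regions with very high storeFileRefCount is disabled" line above names the switch directly: hbase.regions.recovery.store.file.ref.count. A minimal sketch of turning the feature on, assuming a threshold of 3 purely for illustration (StoreFileRefCountSketch is an invented class name):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class StoreFileRefCountSketch {
      public static Configuration withRefCountRecovery() {
        Configuration conf = HBaseConfiguration.create();
        // Per the log message, any value > 0 enables the chore that reopens
        // regions whose store files show an unusually high reference count;
        // 3 here is an arbitrary example value.
        conf.setInt("hbase.regions.recovery.store.file.ref.count", 3);
        return conf;
      }
    }
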
2023-07-13 22:16:10,748 DEBUG [RS:1;jenkins-hbase4:42899] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a883469, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:10,748 DEBUG [RS:0;jenkins-hbase4:44461] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a8c364d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:10,748 DEBUG [RS:1;jenkins-hbase4:42899] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3bb53e52, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:16:10,748 DEBUG [RS:0;jenkins-hbase4:44461] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6bb13c76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:16:10,749 DEBUG [RS:2;jenkins-hbase4:46209] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@58b334ae, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:10,749 DEBUG [RS:2;jenkins-hbase4:46209] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c8cab60, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:16:10,760 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:10,760 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:10,761 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', 
DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced 2023-07-13 22:16:10,761 DEBUG [RS:1;jenkins-hbase4:42899] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:42899 2023-07-13 22:16:10,761 INFO [RS:1;jenkins-hbase4:42899] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 22:16:10,761 INFO [RS:1;jenkins-hbase4:42899] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 22:16:10,761 DEBUG [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 22:16:10,761 DEBUG [RS:0;jenkins-hbase4:44461] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44461 2023-07-13 22:16:10,761 INFO [RS:0;jenkins-hbase4:44461] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 22:16:10,761 INFO [RS:0;jenkins-hbase4:44461] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 22:16:10,761 DEBUG [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 22:16:10,762 DEBUG [RS:2;jenkins-hbase4:46209] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:46209 2023-07-13 22:16:10,762 INFO [RS:2;jenkins-hbase4:46209] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 22:16:10,762 INFO [RS:2;jenkins-hbase4:46209] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 22:16:10,762 DEBUG [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1022): About to register with Master. 
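The hbase:meta descriptor printed above is built internally, but user tables with similar family settings can be described through the public builder API in the 2.x client. A minimal sketch, assuming a hypothetical table named "demo" and mirroring the 'info' family shown above (NONE bloom filter, in-memory, 3 versions, 8 KB blocks); MetaLikeDescriptorSketch is an invented class name.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaLikeDescriptorSketch {
      // Builds a descriptor for a hypothetical "demo" table whose 'info' family
      // uses settings comparable to those printed for hbase:meta above.
      public static TableDescriptor build() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.NONE)
                .setInMemory(true)
                .setMaxVersions(3)
                .setBlocksize(8192)
                .build())
            .build();
      }
    }

The descriptor would then be passed to Admin.createTable; that step is omitted here since this log never creates such a table.
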
2023-07-13 22:16:10,762 INFO [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43291,1689286569634 with isa=jenkins-hbase4.apache.org/172.31.14.131:42899, startcode=1689286570025 2023-07-13 22:16:10,762 DEBUG [RS:1;jenkins-hbase4:42899] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 22:16:10,762 INFO [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43291,1689286569634 with isa=jenkins-hbase4.apache.org/172.31.14.131:46209, startcode=1689286570265 2023-07-13 22:16:10,762 DEBUG [RS:2;jenkins-hbase4:46209] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 22:16:10,762 INFO [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43291,1689286569634 with isa=jenkins-hbase4.apache.org/172.31.14.131:44461, startcode=1689286569818 2023-07-13 22:16:10,763 DEBUG [RS:0;jenkins-hbase4:44461] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 22:16:10,767 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40099, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 22:16:10,767 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50511, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 22:16:10,767 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53013, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 22:16:10,769 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43291] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:10,769 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 22:16:10,769 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 22:16:10,769 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43291] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:10,770 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 22:16:10,770 DEBUG [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced 2023-07-13 22:16:10,770 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 22:16:10,770 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43291] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:10,770 DEBUG [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44513 2023-07-13 22:16:10,770 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 22:16:10,770 DEBUG [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced 2023-07-13 22:16:10,770 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 22:16:10,770 DEBUG [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34297 2023-07-13 22:16:10,770 DEBUG [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced 2023-07-13 22:16:10,770 DEBUG [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44513 2023-07-13 22:16:10,770 DEBUG [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44513 2023-07-13 22:16:10,770 DEBUG [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34297 2023-07-13 22:16:10,770 DEBUG [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34297 2023-07-13 22:16:10,772 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:10,778 DEBUG [RS:0;jenkins-hbase4:44461] zookeeper.ZKUtil(162): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:10,779 WARN [RS:0;jenkins-hbase4:44461] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
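Once the three region servers above have registered and been picked up by the default group ("Updated with servers: 3"), group membership is normally manipulated through the RSGroupAdmin client from the hbase-rsgroup module. A minimal sketch, assuming an existing Connection, a hypothetical group name "test_group", and a placeholder host/port; RsGroupMoveSketch is an invented class name, and the method signatures reflect the 2.x RSGroupAdminClient as I understand it, so they should be confirmed against the exact release.

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupMoveSketch {
      // Creates a group and moves one server into it; the host and port below are
      // placeholders, not values taken from this log.
      public static void moveOneServer(Connection conn) throws Exception {
        RSGroupAdmin admin = new RSGroupAdminClient(conn);
        admin.addRSGroup("test_group");
        admin.moveServers(
            Collections.singleton(Address.fromParts("rs-host.example.org", 16020)),
            "test_group");
        RSGroupInfo info = admin.getRSGroupInfo("test_group");
        System.out.println("servers in test_group: " + info.getServers());
      }
    }
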
2023-07-13 22:16:10,779 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42899,1689286570025] 2023-07-13 22:16:10,779 INFO [RS:0;jenkins-hbase4:44461] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:16:10,779 DEBUG [RS:2;jenkins-hbase4:46209] zookeeper.ZKUtil(162): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:10,779 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44461,1689286569818] 2023-07-13 22:16:10,779 DEBUG [RS:1;jenkins-hbase4:42899] zookeeper.ZKUtil(162): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:10,779 WARN [RS:2;jenkins-hbase4:46209] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 22:16:10,779 DEBUG [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/WALs/jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:10,779 INFO [RS:2;jenkins-hbase4:46209] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:16:10,779 WARN [RS:1;jenkins-hbase4:42899] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 22:16:10,779 DEBUG [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/WALs/jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:10,779 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46209,1689286570265] 2023-07-13 22:16:10,779 INFO [RS:1;jenkins-hbase4:42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:16:10,780 DEBUG [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/WALs/jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:10,791 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:10,795 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 22:16:10,795 DEBUG [RS:1;jenkins-hbase4:42899] zookeeper.ZKUtil(162): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:10,796 DEBUG [RS:1;jenkins-hbase4:42899] zookeeper.ZKUtil(162): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:10,796 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/info 2023-07-13 22:16:10,796 DEBUG [RS:0;jenkins-hbase4:44461] zookeeper.ZKUtil(162): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:10,796 DEBUG [RS:2;jenkins-hbase4:46209] zookeeper.ZKUtil(162): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:10,796 DEBUG [RS:1;jenkins-hbase4:42899] zookeeper.ZKUtil(162): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:10,797 DEBUG [RS:0;jenkins-hbase4:44461] zookeeper.ZKUtil(162): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:10,797 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered 
window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 22:16:10,797 DEBUG [RS:2;jenkins-hbase4:46209] zookeeper.ZKUtil(162): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:10,797 DEBUG [RS:2;jenkins-hbase4:46209] zookeeper.ZKUtil(162): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:10,797 DEBUG [RS:0;jenkins-hbase4:44461] zookeeper.ZKUtil(162): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:10,798 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:10,798 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 22:16:10,798 DEBUG [RS:1;jenkins-hbase4:42899] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 22:16:10,798 INFO [RS:1;jenkins-hbase4:42899] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 22:16:10,799 DEBUG [RS:2;jenkins-hbase4:46209] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 22:16:10,799 DEBUG [RS:0;jenkins-hbase4:44461] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 22:16:10,799 INFO [RS:0;jenkins-hbase4:44461] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 22:16:10,799 INFO [RS:2;jenkins-hbase4:46209] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 22:16:10,800 INFO [RS:1;jenkins-hbase4:42899] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 22:16:10,800 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/rep_barrier 2023-07-13 22:16:10,800 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 22:16:10,801 INFO [RS:1;jenkins-hbase4:42899] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 22:16:10,801 INFO [RS:1;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,801 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:10,801 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 22:16:10,802 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/table 2023-07-13 22:16:10,803 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 22:16:10,803 INFO [RS:0;jenkins-hbase4:44461] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 22:16:10,803 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:10,805 INFO [RS:2;jenkins-hbase4:46209] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 22:16:10,806 INFO [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 22:16:10,807 INFO [RS:0;jenkins-hbase4:44461] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 22:16:10,807 INFO [RS:0;jenkins-hbase4:44461] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 22:16:10,808 INFO [RS:2;jenkins-hbase4:46209] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 22:16:10,808 INFO [RS:2;jenkins-hbase4:46209] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,809 INFO [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 22:16:10,809 INFO [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 22:16:10,809 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740 2023-07-13 22:16:10,811 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740 2023-07-13 22:16:10,813 INFO [RS:0;jenkins-hbase4:44461] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,813 INFO [RS:2;jenkins-hbase4:46209] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,813 DEBUG [RS:0;jenkins-hbase4:44461] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,814 INFO [RS:1;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 22:16:10,814 DEBUG [RS:0;jenkins-hbase4:44461] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,814 DEBUG [RS:2;jenkins-hbase4:46209] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,814 DEBUG [RS:0;jenkins-hbase4:44461] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,814 DEBUG [RS:2;jenkins-hbase4:46209] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:0;jenkins-hbase4:44461] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,814 DEBUG [RS:1;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:0;jenkins-hbase4:44461] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:1;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:0;jenkins-hbase4:44461] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:16:10,815 DEBUG [RS:1;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:0;jenkins-hbase4:44461] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:1;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:0;jenkins-hbase4:44461] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:2;jenkins-hbase4:46209] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:0;jenkins-hbase4:44461] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:1;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:0;jenkins-hbase4:44461] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:1;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, 
corePoolSize=2, maxPoolSize=2 2023-07-13 22:16:10,815 DEBUG [RS:2;jenkins-hbase4:46209] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:1;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:2;jenkins-hbase4:46209] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,815 DEBUG [RS:1;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,816 DEBUG [RS:2;jenkins-hbase4:46209] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:16:10,816 INFO [RS:0;jenkins-hbase4:44461] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,816 DEBUG [RS:2;jenkins-hbase4:46209] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,816 DEBUG [RS:1;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,817 DEBUG [RS:2;jenkins-hbase4:46209] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,817 DEBUG [RS:1;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,817 DEBUG [RS:2;jenkins-hbase4:46209] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,816 INFO [RS:0;jenkins-hbase4:44461] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,817 DEBUG [RS:2;jenkins-hbase4:46209] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:10,817 INFO [RS:0;jenkins-hbase4:44461] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,817 INFO [RS:0;jenkins-hbase4:44461] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,817 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-13 22:16:10,819 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 22:16:10,821 INFO [RS:1;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-13 22:16:10,822 INFO [RS:1;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,822 INFO [RS:1;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,822 INFO [RS:1;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,822 INFO [RS:2;jenkins-hbase4:46209] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,822 INFO [RS:2;jenkins-hbase4:46209] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,823 INFO [RS:2;jenkins-hbase4:46209] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,823 INFO [RS:2;jenkins-hbase4:46209] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,823 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:10,824 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9837285600, jitterRate=-0.08383138477802277}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 22:16:10,824 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 22:16:10,824 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 22:16:10,824 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 22:16:10,824 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 22:16:10,824 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 22:16:10,824 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 22:16:10,824 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 22:16:10,824 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 22:16:10,825 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 22:16:10,825 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-13 22:16:10,825 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 22:16:10,826 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 22:16:10,835 INFO 
[PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-13 22:16:10,835 INFO [RS:1;jenkins-hbase4:42899] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 22:16:10,835 INFO [RS:1;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42899,1689286570025-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,836 INFO [RS:0;jenkins-hbase4:44461] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 22:16:10,836 INFO [RS:0;jenkins-hbase4:44461] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44461,1689286569818-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,841 INFO [RS:2;jenkins-hbase4:46209] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 22:16:10,841 INFO [RS:2;jenkins-hbase4:46209] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46209,1689286570265-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,854 INFO [RS:1;jenkins-hbase4:42899] regionserver.Replication(203): jenkins-hbase4.apache.org,42899,1689286570025 started 2023-07-13 22:16:10,854 INFO [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42899,1689286570025, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42899, sessionid=0x10160c1f18b0002 2023-07-13 22:16:10,854 INFO [RS:2;jenkins-hbase4:46209] regionserver.Replication(203): jenkins-hbase4.apache.org,46209,1689286570265 started 2023-07-13 22:16:10,854 INFO [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46209,1689286570265, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46209, sessionid=0x10160c1f18b0003 2023-07-13 22:16:10,854 DEBUG [RS:1;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 22:16:10,854 DEBUG [RS:1;jenkins-hbase4:42899] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:10,854 DEBUG [RS:1;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42899,1689286570025' 2023-07-13 22:16:10,854 DEBUG [RS:1;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 22:16:10,854 INFO [RS:0;jenkins-hbase4:44461] regionserver.Replication(203): jenkins-hbase4.apache.org,44461,1689286569818 started 2023-07-13 22:16:10,854 DEBUG [RS:2;jenkins-hbase4:46209] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 22:16:10,854 DEBUG [RS:2;jenkins-hbase4:46209] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:10,854 DEBUG [RS:2;jenkins-hbase4:46209] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46209,1689286570265' 2023-07-13 22:16:10,854 DEBUG [RS:2;jenkins-hbase4:46209] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: 
'/hbase/flush-table-proc/abort' 2023-07-13 22:16:10,854 INFO [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44461,1689286569818, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44461, sessionid=0x10160c1f18b0001 2023-07-13 22:16:10,855 DEBUG [RS:0;jenkins-hbase4:44461] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 22:16:10,855 DEBUG [RS:0;jenkins-hbase4:44461] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:10,855 DEBUG [RS:0;jenkins-hbase4:44461] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44461,1689286569818' 2023-07-13 22:16:10,855 DEBUG [RS:0;jenkins-hbase4:44461] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 22:16:10,855 DEBUG [RS:1;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 22:16:10,855 DEBUG [RS:2;jenkins-hbase4:46209] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 22:16:10,855 DEBUG [RS:0;jenkins-hbase4:44461] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 22:16:10,855 DEBUG [RS:1;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 22:16:10,855 DEBUG [RS:2;jenkins-hbase4:46209] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 22:16:10,855 DEBUG [RS:2;jenkins-hbase4:46209] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 22:16:10,855 DEBUG [RS:2;jenkins-hbase4:46209] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:10,855 DEBUG [RS:2;jenkins-hbase4:46209] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46209,1689286570265' 2023-07-13 22:16:10,855 DEBUG [RS:2;jenkins-hbase4:46209] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 22:16:10,855 DEBUG [RS:1;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 22:16:10,856 DEBUG [RS:1;jenkins-hbase4:42899] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:10,856 DEBUG [RS:1;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42899,1689286570025' 2023-07-13 22:16:10,856 DEBUG [RS:1;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 22:16:10,855 DEBUG [RS:0;jenkins-hbase4:44461] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 22:16:10,856 DEBUG [RS:0;jenkins-hbase4:44461] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 22:16:10,856 DEBUG [RS:0;jenkins-hbase4:44461] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:10,856 DEBUG [RS:0;jenkins-hbase4:44461] 
procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44461,1689286569818' 2023-07-13 22:16:10,856 DEBUG [RS:2;jenkins-hbase4:46209] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 22:16:10,856 DEBUG [RS:0;jenkins-hbase4:44461] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 22:16:10,856 DEBUG [RS:1;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 22:16:10,856 DEBUG [RS:2;jenkins-hbase4:46209] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 22:16:10,856 INFO [RS:2;jenkins-hbase4:46209] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-13 22:16:10,857 DEBUG [RS:0;jenkins-hbase4:44461] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 22:16:10,857 DEBUG [RS:1;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 22:16:10,857 INFO [RS:1;jenkins-hbase4:42899] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-13 22:16:10,857 DEBUG [RS:0;jenkins-hbase4:44461] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 22:16:10,858 INFO [RS:0;jenkins-hbase4:44461] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-13 22:16:10,859 INFO [RS:2;jenkins-hbase4:46209] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,859 INFO [RS:0;jenkins-hbase4:44461] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,859 INFO [RS:1;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,860 DEBUG [RS:2;jenkins-hbase4:46209] zookeeper.ZKUtil(398): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-13 22:16:10,860 INFO [RS:2;jenkins-hbase4:46209] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-13 22:16:10,860 DEBUG [RS:0;jenkins-hbase4:44461] zookeeper.ZKUtil(398): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-13 22:16:10,860 INFO [RS:0;jenkins-hbase4:44461] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-13 22:16:10,860 INFO [RS:2;jenkins-hbase4:46209] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,860 INFO [RS:0;jenkins-hbase4:44461] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 22:16:10,860 DEBUG [RS:1;jenkins-hbase4:42899] zookeeper.ZKUtil(398): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-13 22:16:10,861 INFO [RS:1;jenkins-hbase4:42899] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-13 22:16:10,861 INFO [RS:1;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,861 INFO [RS:1;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,861 INFO [RS:0;jenkins-hbase4:44461] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,861 INFO [RS:2;jenkins-hbase4:46209] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:10,965 INFO [RS:1;jenkins-hbase4:42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42899%2C1689286570025, suffix=, logDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/WALs/jenkins-hbase4.apache.org,42899,1689286570025, archiveDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/oldWALs, maxLogs=32 2023-07-13 22:16:10,965 INFO [RS:0;jenkins-hbase4:44461] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44461%2C1689286569818, suffix=, logDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/WALs/jenkins-hbase4.apache.org,44461,1689286569818, archiveDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/oldWALs, maxLogs=32 2023-07-13 22:16:10,965 INFO [RS:2;jenkins-hbase4:46209] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46209%2C1689286570265, suffix=, logDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/WALs/jenkins-hbase4.apache.org,46209,1689286570265, archiveDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/oldWALs, maxLogs=32 2023-07-13 22:16:10,985 DEBUG [jenkins-hbase4:43291] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 22:16:10,986 DEBUG [jenkins-hbase4:43291] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:10,986 DEBUG [jenkins-hbase4:43291] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:10,986 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35051,DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f,DISK] 2023-07-13 22:16:10,986 DEBUG [jenkins-hbase4:43291] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:10,987 DEBUG [jenkins-hbase4:43291] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:16:10,987 DEBUG [jenkins-hbase4:43291] balancer.BaseLoadBalancer$Cluster(378): Number of 
tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:10,987 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40535,DS-e3945a73-aee6-441a-bf82-67e5af08a714,DISK] 2023-07-13 22:16:10,988 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42899,1689286570025, state=OPENING 2023-07-13 22:16:10,988 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33227,DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f,DISK] 2023-07-13 22:16:10,988 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33227,DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f,DISK] 2023-07-13 22:16:10,989 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40535,DS-e3945a73-aee6-441a-bf82-67e5af08a714,DISK] 2023-07-13 22:16:10,990 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35051,DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f,DISK] 2023-07-13 22:16:10,990 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-13 22:16:10,991 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:10,992 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 22:16:10,994 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42899,1689286570025}] 2023-07-13 22:16:10,996 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33227,DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f,DISK] 2023-07-13 22:16:10,996 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40535,DS-e3945a73-aee6-441a-bf82-67e5af08a714,DISK] 2023-07-13 22:16:10,996 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35051,DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f,DISK] 2023-07-13 22:16:10,997 INFO [RS:1;jenkins-hbase4:42899] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/WALs/jenkins-hbase4.apache.org,42899,1689286570025/jenkins-hbase4.apache.org%2C42899%2C1689286570025.1689286570967 2023-07-13 22:16:10,997 INFO [RS:0;jenkins-hbase4:44461] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/WALs/jenkins-hbase4.apache.org,44461,1689286569818/jenkins-hbase4.apache.org%2C44461%2C1689286569818.1689286570971 2023-07-13 22:16:10,997 DEBUG [RS:1;jenkins-hbase4:42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35051,DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f,DISK], DatanodeInfoWithStorage[127.0.0.1:40535,DS-e3945a73-aee6-441a-bf82-67e5af08a714,DISK], DatanodeInfoWithStorage[127.0.0.1:33227,DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f,DISK]] 2023-07-13 22:16:10,997 DEBUG [RS:0;jenkins-hbase4:44461] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33227,DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f,DISK], DatanodeInfoWithStorage[127.0.0.1:40535,DS-e3945a73-aee6-441a-bf82-67e5af08a714,DISK], DatanodeInfoWithStorage[127.0.0.1:35051,DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f,DISK]] 2023-07-13 22:16:11,002 INFO [RS:2;jenkins-hbase4:46209] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/WALs/jenkins-hbase4.apache.org,46209,1689286570265/jenkins-hbase4.apache.org%2C46209%2C1689286570265.1689286570971 2023-07-13 22:16:11,002 DEBUG [RS:2;jenkins-hbase4:46209] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33227,DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f,DISK], DatanodeInfoWithStorage[127.0.0.1:35051,DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f,DISK], DatanodeInfoWithStorage[127.0.0.1:40535,DS-e3945a73-aee6-441a-bf82-67e5af08a714,DISK]] 2023-07-13 22:16:11,154 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:11,154 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:16:11,156 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47030, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:16:11,161 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 22:16:11,161 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:16:11,163 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42899%2C1689286570025.meta, suffix=.meta, logDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/WALs/jenkins-hbase4.apache.org,42899,1689286570025, archiveDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/oldWALs, maxLogs=32 2023-07-13 22:16:11,177 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35051,DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f,DISK] 2023-07-13 22:16:11,178 DEBUG [RS-EventLoopGroup-11-2] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33227,DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f,DISK] 2023-07-13 22:16:11,179 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40535,DS-e3945a73-aee6-441a-bf82-67e5af08a714,DISK] 2023-07-13 22:16:11,182 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/WALs/jenkins-hbase4.apache.org,42899,1689286570025/jenkins-hbase4.apache.org%2C42899%2C1689286570025.meta.1689286571163.meta 2023-07-13 22:16:11,182 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35051,DS-d84a0f7e-9b80-4f97-865d-4e0bd32e970f,DISK], DatanodeInfoWithStorage[127.0.0.1:33227,DS-6a3c28af-f1fc-459c-8427-24b01bbb4d2f,DISK], DatanodeInfoWithStorage[127.0.0.1:40535,DS-e3945a73-aee6-441a-bf82-67e5af08a714,DISK]] 2023-07-13 22:16:11,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:11,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 22:16:11,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 22:16:11,183 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-13 22:16:11,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 22:16:11,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:11,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 22:16:11,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 22:16:11,184 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 22:16:11,186 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/info 2023-07-13 22:16:11,186 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/info 2023-07-13 22:16:11,186 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 22:16:11,187 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:11,187 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 22:16:11,188 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/rep_barrier 2023-07-13 22:16:11,188 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/rep_barrier 2023-07-13 22:16:11,188 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 22:16:11,189 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:11,189 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 22:16:11,190 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/table 2023-07-13 22:16:11,190 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/table 2023-07-13 22:16:11,190 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 22:16:11,191 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:11,192 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740 2023-07-13 22:16:11,193 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740 2023-07-13 22:16:11,195 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-13 22:16:11,197 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 22:16:11,198 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12020075040, jitterRate=0.11945672333240509}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 22:16:11,198 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 22:16:11,198 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689286571154 2023-07-13 22:16:11,209 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 22:16:11,209 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 22:16:11,210 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42899,1689286570025, state=OPEN 2023-07-13 22:16:11,211 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 22:16:11,211 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 22:16:11,215 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-13 22:16:11,215 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42899,1689286570025 in 219 msec 2023-07-13 22:16:11,217 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-13 22:16:11,217 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 390 msec 2023-07-13 22:16:11,218 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 578 msec 2023-07-13 22:16:11,218 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689286571218, completionTime=-1 2023-07-13 22:16:11,218 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-13 22:16:11,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-13 22:16:11,222 DEBUG [hconnection-0x647dd4ce-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:16:11,223 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47032, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:16:11,225 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 22:16:11,225 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689286631225 2023-07-13 22:16:11,225 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689286691225 2023-07-13 22:16:11,225 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-13 22:16:11,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43291,1689286569634-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:11,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43291,1689286569634-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:11,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43291,1689286569634-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:11,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:43291, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:11,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:11,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-13 22:16:11,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:11,235 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-13 22:16:11,236 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-13 22:16:11,237 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:11,237 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:16:11,239 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/namespace/d091601b30757c29faf949911bc91c2c 2023-07-13 22:16:11,239 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/namespace/d091601b30757c29faf949911bc91c2c empty. 2023-07-13 22:16:11,240 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/namespace/d091601b30757c29faf949911bc91c2c 2023-07-13 22:16:11,240 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-13 22:16:11,255 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43291,1689286569634] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:11,259 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43291,1689286569634] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-13 22:16:11,261 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:11,261 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:11,262 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 
d091601b30757c29faf949911bc91c2c, NAME => 'hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp 2023-07-13 22:16:11,263 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:16:11,266 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494 2023-07-13 22:16:11,267 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494 empty. 2023-07-13 22:16:11,268 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494 2023-07-13 22:16:11,268 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-13 22:16:11,279 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:11,279 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing d091601b30757c29faf949911bc91c2c, disabling compactions & flushes 2023-07-13 22:16:11,279 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 2023-07-13 22:16:11,279 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 2023-07-13 22:16:11,279 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. after waiting 0 ms 2023-07-13 22:16:11,279 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 2023-07-13 22:16:11,279 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 
2023-07-13 22:16:11,279 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for d091601b30757c29faf949911bc91c2c: 2023-07-13 22:16:11,282 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:16:11,283 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286571283"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286571283"}]},"ts":"1689286571283"} 2023-07-13 22:16:11,289 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 22:16:11,289 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:16:11,290 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286571290"}]},"ts":"1689286571290"} 2023-07-13 22:16:11,293 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:11,293 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-13 22:16:11,295 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => fefd2edd8540e873b08515c88643d494, NAME => 'hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp 2023-07-13 22:16:11,297 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:11,297 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:11,297 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:11,297 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:16:11,297 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:11,298 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=d091601b30757c29faf949911bc91c2c, ASSIGN}] 2023-07-13 22:16:11,299 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=d091601b30757c29faf949911bc91c2c, ASSIGN 2023-07-13 22:16:11,299 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=d091601b30757c29faf949911bc91c2c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46209,1689286570265; forceNewPlan=false, retain=false 2023-07-13 22:16:11,321 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:11,322 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing fefd2edd8540e873b08515c88643d494, disabling compactions & flushes 2023-07-13 22:16:11,322 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 2023-07-13 22:16:11,322 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 2023-07-13 22:16:11,322 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. after waiting 0 ms 2023-07-13 22:16:11,322 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 2023-07-13 22:16:11,322 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 2023-07-13 22:16:11,322 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for fefd2edd8540e873b08515c88643d494: 2023-07-13 22:16:11,325 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:16:11,325 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286571325"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286571325"}]},"ts":"1689286571325"} 2023-07-13 22:16:11,327 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-13 22:16:11,327 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:16:11,328 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286571327"}]},"ts":"1689286571327"} 2023-07-13 22:16:11,329 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-13 22:16:11,333 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:11,333 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:11,333 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:11,333 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:16:11,333 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:11,333 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=fefd2edd8540e873b08515c88643d494, ASSIGN}] 2023-07-13 22:16:11,335 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=fefd2edd8540e873b08515c88643d494, ASSIGN 2023-07-13 22:16:11,336 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=fefd2edd8540e873b08515c88643d494, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42899,1689286570025; forceNewPlan=false, retain=false 2023-07-13 22:16:11,336 INFO [jenkins-hbase4:43291] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
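The ASSIGN subprocedures above end with the balancer choosing a host for each region ("Reassigned 2 regions"). From a client, the resulting placements can be observed with a RegionLocator; a minimal sketch (the class name is illustrative, the calls are standard 2.x client API):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ShowAssignments {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
          // Each location pairs a region with the RegionServer it was assigned to,
          // i.e. the outcome of the TransitRegionStateProcedure/ASSIGN steps above.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getRegionNameAsString() + " -> " + loc.getServerName());
          }
        }
      }
    }
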
2023-07-13 22:16:11,338 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=d091601b30757c29faf949911bc91c2c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:11,338 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=fefd2edd8540e873b08515c88643d494, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:11,338 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286571338"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286571338"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286571338"}]},"ts":"1689286571338"} 2023-07-13 22:16:11,338 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286571338"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286571338"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286571338"}]},"ts":"1689286571338"} 2023-07-13 22:16:11,339 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure d091601b30757c29faf949911bc91c2c, server=jenkins-hbase4.apache.org,46209,1689286570265}] 2023-07-13 22:16:11,340 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure fefd2edd8540e873b08515c88643d494, server=jenkins-hbase4.apache.org,42899,1689286570025}] 2023-07-13 22:16:11,493 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:11,493 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:16:11,495 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59386, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:16:11,499 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 2023-07-13 22:16:11,499 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fefd2edd8540e873b08515c88643d494, NAME => 'hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:11,500 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 2023-07-13 22:16:11,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 22:16:11,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 
service=MultiRowMutationService 2023-07-13 22:16:11,500 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-13 22:16:11,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup fefd2edd8540e873b08515c88643d494 2023-07-13 22:16:11,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:11,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d091601b30757c29faf949911bc91c2c, NAME => 'hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:11,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fefd2edd8540e873b08515c88643d494 2023-07-13 22:16:11,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fefd2edd8540e873b08515c88643d494 2023-07-13 22:16:11,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace d091601b30757c29faf949911bc91c2c 2023-07-13 22:16:11,501 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:11,501 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d091601b30757c29faf949911bc91c2c 2023-07-13 22:16:11,501 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d091601b30757c29faf949911bc91c2c 2023-07-13 22:16:11,502 INFO [StoreOpener-fefd2edd8540e873b08515c88643d494-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region fefd2edd8540e873b08515c88643d494 2023-07-13 22:16:11,502 INFO [StoreOpener-d091601b30757c29faf949911bc91c2c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d091601b30757c29faf949911bc91c2c 2023-07-13 22:16:11,503 DEBUG [StoreOpener-fefd2edd8540e873b08515c88643d494-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494/m 2023-07-13 22:16:11,503 DEBUG [StoreOpener-fefd2edd8540e873b08515c88643d494-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494/m 2023-07-13 22:16:11,504 INFO [StoreOpener-fefd2edd8540e873b08515c88643d494-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fefd2edd8540e873b08515c88643d494 columnFamilyName m 2023-07-13 22:16:11,504 DEBUG [StoreOpener-d091601b30757c29faf949911bc91c2c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/namespace/d091601b30757c29faf949911bc91c2c/info 2023-07-13 22:16:11,504 DEBUG [StoreOpener-d091601b30757c29faf949911bc91c2c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/namespace/d091601b30757c29faf949911bc91c2c/info 2023-07-13 22:16:11,504 INFO [StoreOpener-d091601b30757c29faf949911bc91c2c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d091601b30757c29faf949911bc91c2c columnFamilyName info 2023-07-13 22:16:11,504 INFO [StoreOpener-fefd2edd8540e873b08515c88643d494-1] regionserver.HStore(310): Store=fefd2edd8540e873b08515c88643d494/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:11,505 INFO [StoreOpener-d091601b30757c29faf949911bc91c2c-1] regionserver.HStore(310): Store=d091601b30757c29faf949911bc91c2c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:11,505 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494 2023-07-13 22:16:11,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494 2023-07-13 22:16:11,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/namespace/d091601b30757c29faf949911bc91c2c 2023-07-13 22:16:11,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/namespace/d091601b30757c29faf949911bc91c2c 2023-07-13 22:16:11,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fefd2edd8540e873b08515c88643d494 2023-07-13 22:16:11,510 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d091601b30757c29faf949911bc91c2c 2023-07-13 22:16:11,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:11,512 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fefd2edd8540e873b08515c88643d494; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@39b91a04, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:11,513 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fefd2edd8540e873b08515c88643d494: 2023-07-13 22:16:11,513 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494., pid=9, masterSystemTime=1689286571493 2023-07-13 22:16:11,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 2023-07-13 22:16:11,516 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 
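The open of hbase:rsgroup shows two table-level attributes being honoured: the MultiRowMutationEndpoint coprocessor loaded "from HTD", and the DisabledRegionSplitPolicy carried as SPLIT_POLICY metadata in the descriptor dump. A descriptor of the same shape could be built as below; this is a sketch with a made-up table name, not the code the rsgroup module itself uses:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RsGroupLikeDescriptor {
      public static void main(String[] args) throws Exception {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("rsgroup_like_demo"))   // hypothetical name
            // Coprocessor loaded "from HTD", as the open log reports for hbase:rsgroup.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            // SPLIT_POLICY metadata seen in the descriptor dump above.
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            .build();
        System.out.println(td);
      }
    }
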
2023-07-13 22:16:11,516 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=fefd2edd8540e873b08515c88643d494, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:11,517 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286571516"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286571516"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286571516"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286571516"}]},"ts":"1689286571516"} 2023-07-13 22:16:11,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/namespace/d091601b30757c29faf949911bc91c2c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:11,521 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d091601b30757c29faf949911bc91c2c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11448157280, jitterRate=0.06619273126125336}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:11,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d091601b30757c29faf949911bc91c2c: 2023-07-13 22:16:11,521 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-13 22:16:11,522 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure fefd2edd8540e873b08515c88643d494, server=jenkins-hbase4.apache.org,42899,1689286570025 in 179 msec 2023-07-13 22:16:11,522 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c., pid=8, masterSystemTime=1689286571492 2023-07-13 22:16:11,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 2023-07-13 22:16:11,527 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 
2023-07-13 22:16:11,531 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-13 22:16:11,531 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=fefd2edd8540e873b08515c88643d494, ASSIGN in 189 msec 2023-07-13 22:16:11,531 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=d091601b30757c29faf949911bc91c2c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:11,531 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286571531"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286571531"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286571531"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286571531"}]},"ts":"1689286571531"} 2023-07-13 22:16:11,531 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:16:11,531 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286571531"}]},"ts":"1689286571531"} 2023-07-13 22:16:11,533 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-13 22:16:11,534 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-13 22:16:11,534 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure d091601b30757c29faf949911bc91c2c, server=jenkins-hbase4.apache.org,46209,1689286570265 in 193 msec 2023-07-13 22:16:11,536 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:16:11,537 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-13 22:16:11,537 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=d091601b30757c29faf949911bc91c2c, ASSIGN in 237 msec 2023-07-13 22:16:11,537 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 280 msec 2023-07-13 22:16:11,537 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:16:11,538 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286571537"}]},"ts":"1689286571537"} 2023-07-13 22:16:11,539 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-13 22:16:11,541 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:16:11,542 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 307 msec 2023-07-13 22:16:11,564 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-13 22:16:11,564 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-13 22:16:11,569 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:11,569 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:11,571 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 22:16:11,572 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43291,1689286569634] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 22:16:11,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-13 22:16:11,637 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:16:11,638 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:11,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:16:11,642 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59392, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:16:11,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-13 22:16:11,653 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:16:11,656 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-13 22:16:11,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] 
procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 22:16:11,663 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:16:11,665 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 7 msec 2023-07-13 22:16:11,671 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 22:16:11,673 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 22:16:11,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.179sec 2023-07-13 22:16:11,674 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-13 22:16:11,674 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:11,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-13 22:16:11,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-13 22:16:11,676 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:11,677 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:16:11,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-13 22:16:11,678 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/quota/68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:11,678 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/quota/68ca7965d8e9742664d66529da0a22d2 empty. 
2023-07-13 22:16:11,679 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/quota/68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:11,679 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-13 22:16:11,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-13 22:16:11,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-13 22:16:11,684 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:11,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:11,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 22:16:11,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 22:16:11,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43291,1689286569634-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 22:16:11,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43291,1689286569634-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
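A few entries earlier the RSGroupStartupWorker reported the GroupBasedLoadBalancer online after writing the /hbase/rsgroup/default znode. Group membership can then be inspected through the RSGroupAdminClient that ships with this hbase-rsgroup module; the sketch below is an assumption about typical usage of that client, not part of this test:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListGroups {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The 'default' group is the one whose znode (/hbase/rsgroup/default) the
          // startup worker updated above; every RegionServer starts out in it.
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println(group.getName()
                + " servers=" + group.getServers()
                + " tables=" + group.getTables());
          }
        }
      }
    }
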
2023-07-13 22:16:11,686 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 22:16:11,693 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:11,694 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 68ca7965d8e9742664d66529da0a22d2, NAME => 'hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp 2023-07-13 22:16:11,702 DEBUG [Listener at localhost/33829] zookeeper.ReadOnlyZKClient(139): Connect 0x61d9c774 to 127.0.0.1:50537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:11,703 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:11,703 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 68ca7965d8e9742664d66529da0a22d2, disabling compactions & flushes 2023-07-13 22:16:11,703 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 2023-07-13 22:16:11,703 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 2023-07-13 22:16:11,707 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. after waiting 0 ms 2023-07-13 22:16:11,707 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 2023-07-13 22:16:11,707 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 
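The hbase:quota table being created here (families 'q' and 'u') is where quota settings are persisted once an operator defines them. As an illustration only, with a hypothetical table name and limit, a request throttle could be stored like this:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.quotas.QuotaSettings;
    import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
    import org.apache.hadoop.hbase.quotas.ThrottleType;

    public class SetTableThrottle {
      public static void main(String[] args) throws Exception {
        // Hypothetical table and limit; the resulting setting ends up as a row
        // in the hbase:quota table whose creation is logged above.
        QuotaSettings throttle = QuotaSettingsFactory.throttleTable(
            TableName.valueOf("demo_table"), ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS);
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.setQuota(throttle);
        }
      }
    }
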
2023-07-13 22:16:11,707 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 68ca7965d8e9742664d66529da0a22d2: 2023-07-13 22:16:11,709 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:16:11,710 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689286571710"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286571710"}]},"ts":"1689286571710"} 2023-07-13 22:16:11,711 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 22:16:11,712 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:16:11,712 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286571712"}]},"ts":"1689286571712"} 2023-07-13 22:16:11,713 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-13 22:16:11,720 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:11,720 DEBUG [Listener at localhost/33829] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66563a99, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:11,720 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:11,720 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:11,720 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:16:11,720 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:11,721 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=68ca7965d8e9742664d66529da0a22d2, ASSIGN}] 2023-07-13 22:16:11,721 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=68ca7965d8e9742664d66529da0a22d2, ASSIGN 2023-07-13 22:16:11,722 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=68ca7965d8e9742664d66529da0a22d2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44461,1689286569818; forceNewPlan=false, retain=false 2023-07-13 22:16:11,722 DEBUG [hconnection-0x433db7b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:16:11,724 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 
172.31.14.131:47046, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:16:11,725 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,43291,1689286569634 2023-07-13 22:16:11,725 INFO [Listener at localhost/33829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:11,727 DEBUG [Listener at localhost/33829] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 22:16:11,729 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60908, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 22:16:11,733 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-13 22:16:11,733 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:11,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-13 22:16:11,734 DEBUG [Listener at localhost/33829] zookeeper.ReadOnlyZKClient(139): Connect 0x6fbb5489 to 127.0.0.1:50537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:11,741 DEBUG [Listener at localhost/33829] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7549a206, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:11,742 INFO [Listener at localhost/33829] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:50537 2023-07-13 22:16:11,744 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:16:11,745 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10160c1f18b000a connected 2023-07-13 22:16:11,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-13 22:16:11,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-13 22:16:11,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-13 22:16:11,760 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:16:11,763 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): 
Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 14 msec 2023-07-13 22:16:11,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-13 22:16:11,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:11,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-13 22:16:11,865 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:11,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-13 22:16:11,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 22:16:11,868 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:11,868 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 22:16:11,870 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:16:11,872 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/np1/table1/7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:11,872 INFO [jenkins-hbase4:43291] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 22:16:11,874 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=68ca7965d8e9742664d66529da0a22d2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:11,874 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/np1/table1/7658c5d7260cc45d780b9f225da9488d empty. 
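The np1 namespace is created with the two quota properties shown in the request (maxregions=5, maxtables=2), and np1:table1 with a single 'fam1' family is then created inside it. The equivalent client-side calls would look roughly like the sketch below (the namespace, table, family names, and quota values mirror the log; the surrounding class is illustrative):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateQuotedNamespace {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Namespace limited to 5 regions and 2 tables, matching the properties
          // printed for 'np1' in the log.
          admin.createNamespace(NamespaceDescriptor.create("np1")
              .addConfiguration("hbase.namespace.quota.maxregions", "5")
              .addConfiguration("hbase.namespace.quota.maxtables", "2")
              .build());
          // A single-family table inside that namespace, like np1:table1.
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("np1:table1"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
              .build());
        }
      }
    }
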
2023-07-13 22:16:11,874 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689286571874"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286571874"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286571874"}]},"ts":"1689286571874"} 2023-07-13 22:16:11,875 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/np1/table1/7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:11,875 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-13 22:16:11,875 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 68ca7965d8e9742664d66529da0a22d2, server=jenkins-hbase4.apache.org,44461,1689286569818}] 2023-07-13 22:16:11,893 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:11,895 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7658c5d7260cc45d780b9f225da9488d, NAME => 'np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp 2023-07-13 22:16:11,904 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:11,904 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 7658c5d7260cc45d780b9f225da9488d, disabling compactions & flushes 2023-07-13 22:16:11,904 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. 2023-07-13 22:16:11,904 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. 2023-07-13 22:16:11,904 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. after waiting 0 ms 2023-07-13 22:16:11,904 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. 2023-07-13 22:16:11,904 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. 
2023-07-13 22:16:11,904 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 7658c5d7260cc45d780b9f225da9488d: 2023-07-13 22:16:11,906 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:16:11,907 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689286571907"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286571907"}]},"ts":"1689286571907"} 2023-07-13 22:16:11,909 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 22:16:11,909 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:16:11,909 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286571909"}]},"ts":"1689286571909"} 2023-07-13 22:16:11,910 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-13 22:16:11,914 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:11,914 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:11,915 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:11,915 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:16:11,915 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:11,915 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=7658c5d7260cc45d780b9f225da9488d, ASSIGN}] 2023-07-13 22:16:11,916 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=7658c5d7260cc45d780b9f225da9488d, ASSIGN 2023-07-13 22:16:11,916 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=7658c5d7260cc45d780b9f225da9488d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46209,1689286570265; forceNewPlan=false, retain=false 2023-07-13 22:16:11,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 22:16:12,028 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:12,028 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:16:12,029 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38696, 
version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:16:12,033 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 2023-07-13 22:16:12,034 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 68ca7965d8e9742664d66529da0a22d2, NAME => 'hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:12,034 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:12,034 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:12,034 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:12,034 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:12,035 INFO [StoreOpener-68ca7965d8e9742664d66529da0a22d2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:12,036 DEBUG [StoreOpener-68ca7965d8e9742664d66529da0a22d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/quota/68ca7965d8e9742664d66529da0a22d2/q 2023-07-13 22:16:12,036 DEBUG [StoreOpener-68ca7965d8e9742664d66529da0a22d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/quota/68ca7965d8e9742664d66529da0a22d2/q 2023-07-13 22:16:12,037 INFO [StoreOpener-68ca7965d8e9742664d66529da0a22d2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 68ca7965d8e9742664d66529da0a22d2 columnFamilyName q 2023-07-13 22:16:12,037 INFO [StoreOpener-68ca7965d8e9742664d66529da0a22d2-1] regionserver.HStore(310): Store=68ca7965d8e9742664d66529da0a22d2/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:12,037 INFO [StoreOpener-68ca7965d8e9742664d66529da0a22d2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:12,038 DEBUG [StoreOpener-68ca7965d8e9742664d66529da0a22d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/quota/68ca7965d8e9742664d66529da0a22d2/u 2023-07-13 22:16:12,038 DEBUG [StoreOpener-68ca7965d8e9742664d66529da0a22d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/quota/68ca7965d8e9742664d66529da0a22d2/u 2023-07-13 22:16:12,039 INFO [StoreOpener-68ca7965d8e9742664d66529da0a22d2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 68ca7965d8e9742664d66529da0a22d2 columnFamilyName u 2023-07-13 22:16:12,039 INFO [StoreOpener-68ca7965d8e9742664d66529da0a22d2-1] regionserver.HStore(310): Store=68ca7965d8e9742664d66529da0a22d2/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:12,040 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/quota/68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:12,040 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/quota/68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:12,041 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-13 22:16:12,042 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:12,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/quota/68ca7965d8e9742664d66529da0a22d2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:12,045 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 68ca7965d8e9742664d66529da0a22d2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9714889760, jitterRate=-0.09523038566112518}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-13 22:16:12,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 68ca7965d8e9742664d66529da0a22d2: 2023-07-13 22:16:12,046 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2., pid=16, masterSystemTime=1689286572027 2023-07-13 22:16:12,049 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 2023-07-13 22:16:12,049 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 2023-07-13 22:16:12,049 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=68ca7965d8e9742664d66529da0a22d2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:12,050 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689286572049"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286572049"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286572049"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286572049"}]},"ts":"1689286572049"} 2023-07-13 22:16:12,052 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-13 22:16:12,052 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 68ca7965d8e9742664d66529da0a22d2, server=jenkins-hbase4.apache.org,44461,1689286569818 in 176 msec 2023-07-13 22:16:12,053 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-13 22:16:12,054 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=68ca7965d8e9742664d66529da0a22d2, ASSIGN in 332 msec 2023-07-13 22:16:12,054 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:16:12,054 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286572054"}]},"ts":"1689286572054"} 2023-07-13 22:16:12,055 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-13 22:16:12,057 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:16:12,059 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 383 msec 2023-07-13 22:16:12,067 INFO [jenkins-hbase4:43291] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 22:16:12,067 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=7658c5d7260cc45d780b9f225da9488d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:12,068 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689286572067"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286572067"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286572067"}]},"ts":"1689286572067"} 2023-07-13 22:16:12,069 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 7658c5d7260cc45d780b9f225da9488d, server=jenkins-hbase4.apache.org,46209,1689286570265}] 2023-07-13 22:16:12,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 22:16:12,224 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. 
2023-07-13 22:16:12,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7658c5d7260cc45d780b9f225da9488d, NAME => 'np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:12,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:12,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:12,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:12,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:12,225 INFO [StoreOpener-7658c5d7260cc45d780b9f225da9488d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:12,227 DEBUG [StoreOpener-7658c5d7260cc45d780b9f225da9488d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/np1/table1/7658c5d7260cc45d780b9f225da9488d/fam1 2023-07-13 22:16:12,227 DEBUG [StoreOpener-7658c5d7260cc45d780b9f225da9488d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/np1/table1/7658c5d7260cc45d780b9f225da9488d/fam1 2023-07-13 22:16:12,227 INFO [StoreOpener-7658c5d7260cc45d780b9f225da9488d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7658c5d7260cc45d780b9f225da9488d columnFamilyName fam1 2023-07-13 22:16:12,228 INFO [StoreOpener-7658c5d7260cc45d780b9f225da9488d-1] regionserver.HStore(310): Store=7658c5d7260cc45d780b9f225da9488d/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:12,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/np1/table1/7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:12,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/np1/table1/7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:12,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:12,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/np1/table1/7658c5d7260cc45d780b9f225da9488d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:12,233 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7658c5d7260cc45d780b9f225da9488d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11894466560, jitterRate=0.1077585220336914}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:12,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7658c5d7260cc45d780b9f225da9488d: 2023-07-13 22:16:12,234 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d., pid=18, masterSystemTime=1689286572220 2023-07-13 22:16:12,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. 2023-07-13 22:16:12,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. 2023-07-13 22:16:12,236 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=7658c5d7260cc45d780b9f225da9488d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:12,236 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689286572236"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286572236"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286572236"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286572236"}]},"ts":"1689286572236"} 2023-07-13 22:16:12,238 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-13 22:16:12,238 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 7658c5d7260cc45d780b9f225da9488d, server=jenkins-hbase4.apache.org,46209,1689286570265 in 168 msec 2023-07-13 22:16:12,240 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-13 22:16:12,240 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=7658c5d7260cc45d780b9f225da9488d, ASSIGN in 323 msec 2023-07-13 22:16:12,240 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:16:12,241 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): 
Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286572240"}]},"ts":"1689286572240"} 2023-07-13 22:16:12,242 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-13 22:16:12,244 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:16:12,245 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 384 msec 2023-07-13 22:16:12,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 22:16:12,471 INFO [Listener at localhost/33829] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-13 22:16:12,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:12,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-13 22:16:12,475 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:12,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-13 22:16:12,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 22:16:12,493 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:16:12,495 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38700, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:16:12,498 INFO [PEWorker-3] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=25 msec 2023-07-13 22:16:12,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 22:16:12,582 INFO [Listener at localhost/33829] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. 
The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-13 22:16:12,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:12,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:12,584 INFO [Listener at localhost/33829] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-13 22:16:12,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-13 22:16:12,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-13 22:16:12,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 22:16:12,589 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286572589"}]},"ts":"1689286572589"} 2023-07-13 22:16:12,590 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-13 22:16:12,591 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-13 22:16:12,592 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=7658c5d7260cc45d780b9f225da9488d, UNASSIGN}] 2023-07-13 22:16:12,597 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=7658c5d7260cc45d780b9f225da9488d, UNASSIGN 2023-07-13 22:16:12,598 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=7658c5d7260cc45d780b9f225da9488d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:12,598 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689286572598"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286572598"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286572598"}]},"ts":"1689286572598"} 2023-07-13 22:16:12,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 7658c5d7260cc45d780b9f225da9488d, server=jenkins-hbase4.apache.org,46209,1689286570265}] 2023-07-13 22:16:12,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 22:16:12,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:12,753 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7658c5d7260cc45d780b9f225da9488d, disabling compactions & flushes 2023-07-13 22:16:12,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. 2023-07-13 22:16:12,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. 2023-07-13 22:16:12,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. after waiting 0 ms 2023-07-13 22:16:12,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. 2023-07-13 22:16:12,757 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/np1/table1/7658c5d7260cc45d780b9f225da9488d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:16:12,757 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d. 2023-07-13 22:16:12,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7658c5d7260cc45d780b9f225da9488d: 2023-07-13 22:16:12,759 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:12,759 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=7658c5d7260cc45d780b9f225da9488d, regionState=CLOSED 2023-07-13 22:16:12,759 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689286572759"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286572759"}]},"ts":"1689286572759"} 2023-07-13 22:16:12,762 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-13 22:16:12,762 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 7658c5d7260cc45d780b9f225da9488d, server=jenkins-hbase4.apache.org,46209,1689286570265 in 161 msec 2023-07-13 22:16:12,763 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-13 22:16:12,763 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=7658c5d7260cc45d780b9f225da9488d, UNASSIGN in 170 msec 2023-07-13 22:16:12,764 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286572763"}]},"ts":"1689286572763"} 2023-07-13 22:16:12,765 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-13 22:16:12,767 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-13 22:16:12,769 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished 
pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 183 msec 2023-07-13 22:16:12,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 22:16:12,891 INFO [Listener at localhost/33829] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-13 22:16:12,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-13 22:16:12,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-13 22:16:12,894 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 22:16:12,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-13 22:16:12,895 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 22:16:12,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:12,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 22:16:12,899 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/np1/table1/7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:12,900 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/np1/table1/7658c5d7260cc45d780b9f225da9488d/fam1, FileablePath, hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/np1/table1/7658c5d7260cc45d780b9f225da9488d/recovered.edits] 2023-07-13 22:16:12,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-13 22:16:12,905 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/np1/table1/7658c5d7260cc45d780b9f225da9488d/recovered.edits/4.seqid to hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/archive/data/np1/table1/7658c5d7260cc45d780b9f225da9488d/recovered.edits/4.seqid 2023-07-13 22:16:12,906 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/.tmp/data/np1/table1/7658c5d7260cc45d780b9f225da9488d 2023-07-13 22:16:12,906 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-13 22:16:12,908 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 22:16:12,909 WARN [PEWorker-1] 
procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-13 22:16:12,911 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-13 22:16:12,912 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 22:16:12,912 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-13 22:16:12,912 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286572912"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:12,913 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 22:16:12,913 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7658c5d7260cc45d780b9f225da9488d, NAME => 'np1:table1,,1689286571860.7658c5d7260cc45d780b9f225da9488d.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 22:16:12,913 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-13 22:16:12,913 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689286572913"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:12,914 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-13 22:16:12,918 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 22:16:12,919 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 26 msec 2023-07-13 22:16:13,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-13 22:16:13,002 INFO [Listener at localhost/33829] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-13 22:16:13,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-13 22:16:13,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-13 22:16:13,017 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 22:16:13,020 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 22:16:13,022 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 22:16:13,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 22:16:13,023 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, 
quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-13 22:16:13,023 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:16:13,024 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 22:16:13,025 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 22:16:13,027 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 19 msec 2023-07-13 22:16:13,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43291] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 22:16:13,124 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-13 22:16:13,124 INFO [Listener at localhost/33829] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 22:16:13,124 DEBUG [Listener at localhost/33829] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x61d9c774 to 127.0.0.1:50537 2023-07-13 22:16:13,124 DEBUG [Listener at localhost/33829] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:13,124 DEBUG [Listener at localhost/33829] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 22:16:13,124 DEBUG [Listener at localhost/33829] util.JVMClusterUtil(257): Found active master hash=1748705190, stopped=false 2023-07-13 22:16:13,124 DEBUG [Listener at localhost/33829] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 22:16:13,125 DEBUG [Listener at localhost/33829] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 22:16:13,125 DEBUG [Listener at localhost/33829] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-13 22:16:13,125 INFO [Listener at localhost/33829] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,43291,1689286569634 2023-07-13 22:16:13,127 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:13,127 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:13,127 INFO [Listener at localhost/33829] procedure2.ProcedureExecutor(629): Stopping 2023-07-13 22:16:13,127 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:13,129 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): 
regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:13,127 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:13,129 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:13,129 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:13,129 DEBUG [Listener at localhost/33829] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x20bfc310 to 127.0.0.1:50537 2023-07-13 22:16:13,130 DEBUG [Listener at localhost/33829] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:13,130 INFO [Listener at localhost/33829] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44461,1689286569818' ***** 2023-07-13 22:16:13,130 INFO [Listener at localhost/33829] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 22:16:13,130 INFO [Listener at localhost/33829] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42899,1689286570025' ***** 2023-07-13 22:16:13,130 INFO [Listener at localhost/33829] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 22:16:13,130 INFO [Listener at localhost/33829] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46209,1689286570265' ***** 2023-07-13 22:16:13,130 INFO [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:13,130 INFO [Listener at localhost/33829] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 22:16:13,130 INFO [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:13,130 INFO [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:13,133 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:13,133 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:13,151 INFO [RS:1;jenkins-hbase4:42899] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6ccdb5d4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:13,151 INFO [RS:2;jenkins-hbase4:46209] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@aa32419{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:13,156 INFO [RS:1;jenkins-hbase4:42899] server.AbstractConnector(383): Stopped 
ServerConnector@7fb867a7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:13,157 INFO [RS:0;jenkins-hbase4:44461] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@75f0de34{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:13,157 INFO [RS:1;jenkins-hbase4:42899] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:13,158 INFO [RS:1;jenkins-hbase4:42899] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f993756{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:13,161 INFO [RS:1;jenkins-hbase4:42899] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39385a05{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:13,161 INFO [RS:2;jenkins-hbase4:46209] server.AbstractConnector(383): Stopped ServerConnector@77983366{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:13,161 INFO [RS:0;jenkins-hbase4:44461] server.AbstractConnector(383): Stopped ServerConnector@69b40dc3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:13,161 INFO [RS:2;jenkins-hbase4:46209] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:13,161 INFO [RS:0;jenkins-hbase4:44461] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:13,161 INFO [RS:1;jenkins-hbase4:42899] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 22:16:13,161 INFO [RS:2;jenkins-hbase4:46209] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@c2353b1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:13,161 INFO [RS:0;jenkins-hbase4:44461] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@35a42664{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:13,162 INFO [RS:2;jenkins-hbase4:46209] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7fd62719{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:13,161 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 22:16:13,161 INFO [RS:1;jenkins-hbase4:42899] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 22:16:13,162 INFO [RS:1;jenkins-hbase4:42899] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-13 22:16:13,162 INFO [RS:0;jenkins-hbase4:44461] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@79875f3c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:13,162 INFO [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(3305): Received CLOSE for fefd2edd8540e873b08515c88643d494 2023-07-13 22:16:13,163 INFO [RS:2;jenkins-hbase4:46209] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 22:16:13,163 INFO [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:13,163 DEBUG [RS:1;jenkins-hbase4:42899] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x744fd490 to 127.0.0.1:50537 2023-07-13 22:16:13,163 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 22:16:13,163 DEBUG [RS:1;jenkins-hbase4:42899] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:13,164 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fefd2edd8540e873b08515c88643d494, disabling compactions & flushes 2023-07-13 22:16:13,163 INFO [RS:2;jenkins-hbase4:46209] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 22:16:13,164 INFO [RS:2;jenkins-hbase4:46209] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 22:16:13,164 INFO [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(3305): Received CLOSE for d091601b30757c29faf949911bc91c2c 2023-07-13 22:16:13,164 INFO [RS:1;jenkins-hbase4:42899] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 22:16:13,164 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 2023-07-13 22:16:13,164 INFO [RS:0;jenkins-hbase4:44461] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 22:16:13,164 INFO [RS:1;jenkins-hbase4:42899] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 22:16:13,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 2023-07-13 22:16:13,165 INFO [RS:1;jenkins-hbase4:42899] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 22:16:13,164 INFO [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:13,165 INFO [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 22:16:13,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 
after waiting 0 ms 2023-07-13 22:16:13,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d091601b30757c29faf949911bc91c2c, disabling compactions & flushes 2023-07-13 22:16:13,164 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 22:16:13,165 INFO [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-13 22:16:13,164 INFO [RS:0;jenkins-hbase4:44461] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 22:16:13,168 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 22:16:13,166 DEBUG [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, fefd2edd8540e873b08515c88643d494=hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494.} 2023-07-13 22:16:13,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 2023-07-13 22:16:13,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 2023-07-13 22:16:13,165 DEBUG [RS:2;jenkins-hbase4:46209] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5674c010 to 127.0.0.1:50537 2023-07-13 22:16:13,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 2023-07-13 22:16:13,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. after waiting 0 ms 2023-07-13 22:16:13,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 2023-07-13 22:16:13,168 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing d091601b30757c29faf949911bc91c2c 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-13 22:16:13,168 DEBUG [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1504): Waiting on 1588230740, fefd2edd8540e873b08515c88643d494 2023-07-13 22:16:13,168 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 22:16:13,168 INFO [RS:0;jenkins-hbase4:44461] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-13 22:16:13,168 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 22:16:13,169 INFO [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(3305): Received CLOSE for 68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:13,168 DEBUG [RS:2;jenkins-hbase4:46209] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:13,169 INFO [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 22:16:13,169 DEBUG [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1478): Online Regions={d091601b30757c29faf949911bc91c2c=hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c.} 2023-07-13 22:16:13,168 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing fefd2edd8540e873b08515c88643d494 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-13 22:16:13,169 DEBUG [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1504): Waiting on d091601b30757c29faf949911bc91c2c 2023-07-13 22:16:13,169 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 22:16:13,169 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 22:16:13,169 INFO [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:13,169 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-13 22:16:13,169 DEBUG [RS:0;jenkins-hbase4:44461] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x244bccb4 to 127.0.0.1:50537 2023-07-13 22:16:13,169 DEBUG [RS:0;jenkins-hbase4:44461] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:13,170 INFO [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 22:16:13,170 DEBUG [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1478): Online Regions={68ca7965d8e9742664d66529da0a22d2=hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2.} 2023-07-13 22:16:13,170 DEBUG [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1504): Waiting on 68ca7965d8e9742664d66529da0a22d2 2023-07-13 22:16:13,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 68ca7965d8e9742664d66529da0a22d2, disabling compactions & flushes 2023-07-13 22:16:13,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 2023-07-13 22:16:13,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 2023-07-13 22:16:13,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. after waiting 0 ms 2023-07-13 22:16:13,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 
2023-07-13 22:16:13,177 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/quota/68ca7965d8e9742664d66529da0a22d2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:16:13,178 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 2023-07-13 22:16:13,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 68ca7965d8e9742664d66529da0a22d2: 2023-07-13 22:16:13,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689286571674.68ca7965d8e9742664d66529da0a22d2. 2023-07-13 22:16:13,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/namespace/d091601b30757c29faf949911bc91c2c/.tmp/info/4f26a4d2ef824a4392de1058bd31f224 2023-07-13 22:16:13,195 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4f26a4d2ef824a4392de1058bd31f224 2023-07-13 22:16:13,195 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/.tmp/info/77213931652f4648813620936a86561e 2023-07-13 22:16:13,195 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494/.tmp/m/66d67457dede4e8b999b96d70b9f4fd1 2023-07-13 22:16:13,196 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/namespace/d091601b30757c29faf949911bc91c2c/.tmp/info/4f26a4d2ef824a4392de1058bd31f224 as hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/namespace/d091601b30757c29faf949911bc91c2c/info/4f26a4d2ef824a4392de1058bd31f224 2023-07-13 22:16:13,205 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 77213931652f4648813620936a86561e 2023-07-13 22:16:13,206 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4f26a4d2ef824a4392de1058bd31f224 2023-07-13 22:16:13,206 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/namespace/d091601b30757c29faf949911bc91c2c/info/4f26a4d2ef824a4392de1058bd31f224, entries=3, sequenceid=8, filesize=5.0 K 2023-07-13 22:16:13,207 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494/.tmp/m/66d67457dede4e8b999b96d70b9f4fd1 as hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494/m/66d67457dede4e8b999b96d70b9f4fd1 2023-07-13 22:16:13,209 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for d091601b30757c29faf949911bc91c2c in 41ms, sequenceid=8, compaction requested=false 2023-07-13 22:16:13,209 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-13 22:16:13,220 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494/m/66d67457dede4e8b999b96d70b9f4fd1, entries=1, sequenceid=7, filesize=4.9 K 2023-07-13 22:16:13,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for fefd2edd8540e873b08515c88643d494 in 53ms, sequenceid=7, compaction requested=false 2023-07-13 22:16:13,222 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 22:16:13,222 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:13,226 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:13,226 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:13,233 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/namespace/d091601b30757c29faf949911bc91c2c/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-13 22:16:13,234 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 2023-07-13 22:16:13,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d091601b30757c29faf949911bc91c2c: 2023-07-13 22:16:13,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689286571234.d091601b30757c29faf949911bc91c2c. 2023-07-13 22:16:13,241 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/rsgroup/fefd2edd8540e873b08515c88643d494/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-13 22:16:13,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 22:16:13,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 
2023-07-13 22:16:13,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fefd2edd8540e873b08515c88643d494: 2023-07-13 22:16:13,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689286571255.fefd2edd8540e873b08515c88643d494. 2023-07-13 22:16:13,245 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/.tmp/rep_barrier/b4b41b8f92e44bb4b5c8f17440755c45 2023-07-13 22:16:13,252 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b4b41b8f92e44bb4b5c8f17440755c45 2023-07-13 22:16:13,265 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/.tmp/table/f5b02cc9e7a0441891c36bd1d1c14fb1 2023-07-13 22:16:13,272 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f5b02cc9e7a0441891c36bd1d1c14fb1 2023-07-13 22:16:13,272 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/.tmp/info/77213931652f4648813620936a86561e as hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/info/77213931652f4648813620936a86561e 2023-07-13 22:16:13,278 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 77213931652f4648813620936a86561e 2023-07-13 22:16:13,278 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/info/77213931652f4648813620936a86561e, entries=32, sequenceid=31, filesize=8.5 K 2023-07-13 22:16:13,279 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/.tmp/rep_barrier/b4b41b8f92e44bb4b5c8f17440755c45 as hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/rep_barrier/b4b41b8f92e44bb4b5c8f17440755c45 2023-07-13 22:16:13,284 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b4b41b8f92e44bb4b5c8f17440755c45 2023-07-13 22:16:13,284 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/rep_barrier/b4b41b8f92e44bb4b5c8f17440755c45, entries=1, sequenceid=31, filesize=4.9 K 2023-07-13 22:16:13,284 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/.tmp/table/f5b02cc9e7a0441891c36bd1d1c14fb1 as hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/table/f5b02cc9e7a0441891c36bd1d1c14fb1 2023-07-13 22:16:13,290 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f5b02cc9e7a0441891c36bd1d1c14fb1 2023-07-13 22:16:13,290 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/table/f5b02cc9e7a0441891c36bd1d1c14fb1, entries=8, sequenceid=31, filesize=5.2 K 2023-07-13 22:16:13,291 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 121ms, sequenceid=31, compaction requested=false 2023-07-13 22:16:13,291 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 22:16:13,301 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-13 22:16:13,302 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 22:16:13,302 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 22:16:13,302 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 22:16:13,302 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 22:16:13,369 INFO [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42899,1689286570025; all regions closed. 2023-07-13 22:16:13,369 DEBUG [RS:1;jenkins-hbase4:42899] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-13 22:16:13,369 INFO [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46209,1689286570265; all regions closed. 2023-07-13 22:16:13,369 DEBUG [RS:2;jenkins-hbase4:46209] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-13 22:16:13,370 INFO [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44461,1689286569818; all regions closed. 2023-07-13 22:16:13,371 DEBUG [RS:0;jenkins-hbase4:44461] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-13 22:16:13,380 DEBUG [RS:1;jenkins-hbase4:42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/oldWALs 2023-07-13 22:16:13,380 INFO [RS:1;jenkins-hbase4:42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42899%2C1689286570025.meta:.meta(num 1689286571163) 2023-07-13 22:16:13,385 DEBUG [RS:2;jenkins-hbase4:46209] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/oldWALs 2023-07-13 22:16:13,385 INFO [RS:2;jenkins-hbase4:46209] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46209%2C1689286570265:(num 1689286570971) 2023-07-13 22:16:13,385 DEBUG [RS:2;jenkins-hbase4:46209] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:13,385 INFO [RS:2;jenkins-hbase4:46209] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:13,386 INFO [RS:2;jenkins-hbase4:46209] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:13,386 DEBUG [RS:0;jenkins-hbase4:44461] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/oldWALs 2023-07-13 22:16:13,386 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:13,386 INFO [RS:0;jenkins-hbase4:44461] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44461%2C1689286569818:(num 1689286570971) 2023-07-13 22:16:13,386 DEBUG [RS:0;jenkins-hbase4:44461] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:13,386 INFO [RS:2;jenkins-hbase4:46209] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 22:16:13,386 INFO [RS:0;jenkins-hbase4:44461] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:13,386 DEBUG [RS:1;jenkins-hbase4:42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/oldWALs 2023-07-13 22:16:13,386 INFO [RS:2;jenkins-hbase4:46209] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 22:16:13,386 INFO [RS:2;jenkins-hbase4:46209] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 22:16:13,386 INFO [RS:0;jenkins-hbase4:44461] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:13,387 INFO [RS:0;jenkins-hbase4:44461] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 22:16:13,387 INFO [RS:0;jenkins-hbase4:44461] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 22:16:13,387 INFO [RS:0;jenkins-hbase4:44461] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 22:16:13,386 INFO [RS:1;jenkins-hbase4:42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42899%2C1689286570025:(num 1689286570967) 2023-07-13 22:16:13,387 DEBUG [RS:1;jenkins-hbase4:42899] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:13,388 INFO [RS:1;jenkins-hbase4:42899] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:13,387 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:13,387 INFO [RS:2;jenkins-hbase4:46209] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46209 2023-07-13 22:16:13,390 INFO [RS:0;jenkins-hbase4:44461] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44461 2023-07-13 22:16:13,390 INFO [RS:1;jenkins-hbase4:42899] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:13,390 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:13,392 INFO [RS:1;jenkins-hbase4:42899] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42899 2023-07-13 22:16:13,392 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:13,392 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:13,392 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:13,392 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:13,392 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:13,395 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46209,1689286570265 2023-07-13 22:16:13,395 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:13,395 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:13,395 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:13,395 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44461,1689286569818 2023-07-13 22:16:13,396 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:13,396 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:13,396 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42899,1689286570025] 2023-07-13 22:16:13,396 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42899,1689286570025 2023-07-13 22:16:13,396 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42899,1689286570025; numProcessing=1 2023-07-13 22:16:13,400 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42899,1689286570025 already deleted, retry=false 2023-07-13 22:16:13,400 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42899,1689286570025 expired; onlineServers=2 2023-07-13 22:16:13,400 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44461,1689286569818] 2023-07-13 22:16:13,400 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44461,1689286569818; numProcessing=2 2023-07-13 22:16:13,401 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44461,1689286569818 already deleted, retry=false 2023-07-13 22:16:13,401 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44461,1689286569818 expired; onlineServers=1 2023-07-13 22:16:13,401 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46209,1689286570265] 2023-07-13 22:16:13,401 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46209,1689286570265; numProcessing=3 2023-07-13 22:16:13,403 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46209,1689286570265 already deleted, retry=false 2023-07-13 22:16:13,403 INFO 
[RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46209,1689286570265 expired; onlineServers=0 2023-07-13 22:16:13,403 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43291,1689286569634' ***** 2023-07-13 22:16:13,403 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 22:16:13,403 DEBUG [M:0;jenkins-hbase4:43291] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6c049679, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:16:13,403 INFO [M:0;jenkins-hbase4:43291] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:13,405 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 22:16:13,405 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:13,406 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:16:13,406 INFO [M:0;jenkins-hbase4:43291] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5d38f915{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 22:16:13,406 INFO [M:0;jenkins-hbase4:43291] server.AbstractConnector(383): Stopped ServerConnector@2d8a119b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:13,407 INFO [M:0;jenkins-hbase4:43291] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:13,407 INFO [M:0;jenkins-hbase4:43291] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@13d1ac8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:13,407 INFO [M:0;jenkins-hbase4:43291] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@415470d2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:13,407 INFO [M:0;jenkins-hbase4:43291] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43291,1689286569634 2023-07-13 22:16:13,407 INFO [M:0;jenkins-hbase4:43291] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43291,1689286569634; all regions closed. 
2023-07-13 22:16:13,407 DEBUG [M:0;jenkins-hbase4:43291] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:13,407 INFO [M:0;jenkins-hbase4:43291] master.HMaster(1491): Stopping master jetty server 2023-07-13 22:16:13,408 INFO [M:0;jenkins-hbase4:43291] server.AbstractConnector(383): Stopped ServerConnector@33be496{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:13,408 DEBUG [M:0;jenkins-hbase4:43291] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 22:16:13,408 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-13 22:16:13,408 DEBUG [M:0;jenkins-hbase4:43291] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 22:16:13,408 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689286570731] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689286570731,5,FailOnTimeoutGroup] 2023-07-13 22:16:13,408 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689286570735] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689286570735,5,FailOnTimeoutGroup] 2023-07-13 22:16:13,409 INFO [M:0;jenkins-hbase4:43291] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 22:16:13,410 INFO [M:0;jenkins-hbase4:43291] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-13 22:16:13,410 INFO [M:0;jenkins-hbase4:43291] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:13,410 DEBUG [M:0;jenkins-hbase4:43291] master.HMaster(1512): Stopping service threads 2023-07-13 22:16:13,410 INFO [M:0;jenkins-hbase4:43291] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 22:16:13,410 ERROR [M:0;jenkins-hbase4:43291] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-13 22:16:13,411 INFO [M:0;jenkins-hbase4:43291] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 22:16:13,411 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-13 22:16:13,411 DEBUG [M:0;jenkins-hbase4:43291] zookeeper.ZKUtil(398): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-13 22:16:13,411 WARN [M:0;jenkins-hbase4:43291] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-13 22:16:13,411 INFO [M:0;jenkins-hbase4:43291] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 22:16:13,411 INFO [M:0;jenkins-hbase4:43291] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 22:16:13,412 DEBUG [M:0;jenkins-hbase4:43291] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 22:16:13,412 INFO [M:0;jenkins-hbase4:43291] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:13,412 DEBUG [M:0;jenkins-hbase4:43291] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:13,412 DEBUG [M:0;jenkins-hbase4:43291] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 22:16:13,412 DEBUG [M:0;jenkins-hbase4:43291] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:13,412 INFO [M:0;jenkins-hbase4:43291] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.00 KB heapSize=109.16 KB 2023-07-13 22:16:13,431 INFO [M:0;jenkins-hbase4:43291] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.00 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4305c4f609a344b8bc45fc5c6e9435e2 2023-07-13 22:16:13,437 DEBUG [M:0;jenkins-hbase4:43291] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4305c4f609a344b8bc45fc5c6e9435e2 as hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4305c4f609a344b8bc45fc5c6e9435e2 2023-07-13 22:16:13,442 INFO [M:0;jenkins-hbase4:43291] regionserver.HStore(1080): Added hdfs://localhost:44513/user/jenkins/test-data/62c8c510-c853-816d-7e27-670e152a1ced/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4305c4f609a344b8bc45fc5c6e9435e2, entries=24, sequenceid=194, filesize=12.4 K 2023-07-13 22:16:13,443 INFO [M:0;jenkins-hbase4:43291] regionserver.HRegion(2948): Finished flush of dataSize ~93.00 KB/95228, heapSize ~109.14 KB/111760, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=194, compaction requested=false 2023-07-13 22:16:13,448 INFO [M:0;jenkins-hbase4:43291] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-13 22:16:13,448 DEBUG [M:0;jenkins-hbase4:43291] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 22:16:13,454 INFO [M:0;jenkins-hbase4:43291] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-13 22:16:13,454 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:13,455 INFO [M:0;jenkins-hbase4:43291] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43291 2023-07-13 22:16:13,460 DEBUG [M:0;jenkins-hbase4:43291] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,43291,1689286569634 already deleted, retry=false 2023-07-13 22:16:13,728 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:13,728 INFO [M:0;jenkins-hbase4:43291] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43291,1689286569634; zookeeper connection closed. 2023-07-13 22:16:13,728 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): master:43291-0x10160c1f18b0000, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:13,829 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:13,829 INFO [RS:1;jenkins-hbase4:42899] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42899,1689286570025; zookeeper connection closed. 2023-07-13 22:16:13,829 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10160c1f18b0002, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:13,831 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@12034002] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@12034002 2023-07-13 22:16:13,929 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:13,929 INFO [RS:0;jenkins-hbase4:44461] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44461,1689286569818; zookeeper connection closed. 2023-07-13 22:16:13,929 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:44461-0x10160c1f18b0001, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:13,929 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@329e5c04] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@329e5c04 2023-07-13 22:16:14,029 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:14,029 INFO [RS:2;jenkins-hbase4:46209] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46209,1689286570265; zookeeper connection closed. 
2023-07-13 22:16:14,029 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): regionserver:46209-0x10160c1f18b0003, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:14,030 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5a72fe9e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5a72fe9e 2023-07-13 22:16:14,030 INFO [Listener at localhost/33829] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-13 22:16:14,030 WARN [Listener at localhost/33829] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 22:16:14,034 INFO [Listener at localhost/33829] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 22:16:14,139 WARN [BP-1119356583-172.31.14.131-1689286568608 heartbeating to localhost/127.0.0.1:44513] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 22:16:14,140 WARN [BP-1119356583-172.31.14.131-1689286568608 heartbeating to localhost/127.0.0.1:44513] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1119356583-172.31.14.131-1689286568608 (Datanode Uuid 0f27d696-7191-4d6e-866b-9cec117d49d5) service to localhost/127.0.0.1:44513 2023-07-13 22:16:14,140 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/cluster_feef0bc5-4798-159d-f96b-1165c3947da7/dfs/data/data5/current/BP-1119356583-172.31.14.131-1689286568608] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:14,141 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/cluster_feef0bc5-4798-159d-f96b-1165c3947da7/dfs/data/data6/current/BP-1119356583-172.31.14.131-1689286568608] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:14,144 WARN [Listener at localhost/33829] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 22:16:14,154 INFO [Listener at localhost/33829] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 22:16:14,259 WARN [BP-1119356583-172.31.14.131-1689286568608 heartbeating to localhost/127.0.0.1:44513] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 22:16:14,260 WARN [BP-1119356583-172.31.14.131-1689286568608 heartbeating to localhost/127.0.0.1:44513] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1119356583-172.31.14.131-1689286568608 (Datanode Uuid 402c5051-9113-47d5-9d13-c2ecccbcdf63) service to localhost/127.0.0.1:44513 2023-07-13 22:16:14,261 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/cluster_feef0bc5-4798-159d-f96b-1165c3947da7/dfs/data/data3/current/BP-1119356583-172.31.14.131-1689286568608] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:14,261 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/cluster_feef0bc5-4798-159d-f96b-1165c3947da7/dfs/data/data4/current/BP-1119356583-172.31.14.131-1689286568608] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:14,262 WARN [Listener at localhost/33829] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 22:16:14,266 INFO [Listener at localhost/33829] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 22:16:14,271 WARN [BP-1119356583-172.31.14.131-1689286568608 heartbeating to localhost/127.0.0.1:44513] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 22:16:14,271 WARN [BP-1119356583-172.31.14.131-1689286568608 heartbeating to localhost/127.0.0.1:44513] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1119356583-172.31.14.131-1689286568608 (Datanode Uuid 0647ff0b-7458-41f2-b0eb-39553a485c1a) service to localhost/127.0.0.1:44513 2023-07-13 22:16:14,273 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/cluster_feef0bc5-4798-159d-f96b-1165c3947da7/dfs/data/data1/current/BP-1119356583-172.31.14.131-1689286568608] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:14,273 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/cluster_feef0bc5-4798-159d-f96b-1165c3947da7/dfs/data/data2/current/BP-1119356583-172.31.14.131-1689286568608] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 22:16:14,286 INFO [Listener at localhost/33829] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 22:16:14,402 INFO [Listener at localhost/33829] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-13 22:16:14,443 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-13 22:16:14,443 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-13 22:16:14,443 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.log.dir so I do NOT create it in target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0 2023-07-13 22:16:14,444 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d1e4d228-897c-b9ae-57a1-e3ae01806f5a/hadoop.tmp.dir so I do NOT create it in target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0 2023-07-13 22:16:14,444 INFO [Listener at localhost/33829] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990, deleteOnExit=true 2023-07-13 22:16:14,444 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-13 22:16:14,444 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/test.cache.data in system properties and HBase conf 2023-07-13 22:16:14,444 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.tmp.dir in system properties and HBase conf 2023-07-13 22:16:14,444 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.log.dir in system properties and HBase conf 2023-07-13 22:16:14,445 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-13 22:16:14,445 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-13 22:16:14,445 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-13 22:16:14,445 DEBUG [Listener at localhost/33829] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-13 22:16:14,445 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-13 22:16:14,445 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-13 22:16:14,445 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-13 22:16:14,446 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 22:16:14,446 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-13 22:16:14,446 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-13 22:16:14,446 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 22:16:14,446 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 22:16:14,446 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-13 22:16:14,446 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/nfs.dump.dir in system properties and HBase conf 2023-07-13 22:16:14,446 INFO [Listener at localhost/33829] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/java.io.tmpdir in system properties and HBase conf 2023-07-13 22:16:14,446 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 22:16:14,446 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-13 22:16:14,446 INFO [Listener at localhost/33829] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-13 22:16:14,450 WARN [Listener at localhost/33829] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 22:16:14,451 WARN [Listener at localhost/33829] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 22:16:14,494 WARN [Listener at localhost/33829] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:16:14,497 INFO [Listener at localhost/33829] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:16:14,500 DEBUG [Listener at localhost/33829-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10160c1f18b000a, quorum=127.0.0.1:50537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-13 22:16:14,500 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10160c1f18b000a, quorum=127.0.0.1:50537, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-13 22:16:14,502 INFO [Listener at localhost/33829] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/java.io.tmpdir/Jetty_localhost_38383_hdfs____3ps7mb/webapp 2023-07-13 22:16:14,599 INFO [Listener at localhost/33829] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38383 2023-07-13 22:16:14,604 WARN [Listener at localhost/33829] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 22:16:14,604 WARN [Listener at localhost/33829] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 22:16:14,642 WARN [Listener at localhost/40407] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 22:16:14,653 WARN [Listener at localhost/40407] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 22:16:14,658 WARN [Listener 
at localhost/40407] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:16:14,660 INFO [Listener at localhost/40407] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:16:14,664 INFO [Listener at localhost/40407] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/java.io.tmpdir/Jetty_localhost_38511_datanode____ynqhrr/webapp 2023-07-13 22:16:14,756 INFO [Listener at localhost/40407] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38511 2023-07-13 22:16:14,764 WARN [Listener at localhost/34055] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 22:16:14,780 WARN [Listener at localhost/34055] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 22:16:14,783 WARN [Listener at localhost/34055] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:16:14,784 INFO [Listener at localhost/34055] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:16:14,789 INFO [Listener at localhost/34055] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/java.io.tmpdir/Jetty_localhost_46527_datanode____.yku55t/webapp 2023-07-13 22:16:14,870 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9e88de176894ecd8: Processing first storage report for DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2 from datanode 323e90a0-9f34-4753-97a7-5085f0ae2659 2023-07-13 22:16:14,870 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9e88de176894ecd8: from storage DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2 node DatanodeRegistration(127.0.0.1:36163, datanodeUuid=323e90a0-9f34-4753-97a7-5085f0ae2659, infoPort=38301, infoSecurePort=0, ipcPort=34055, storageInfo=lv=-57;cid=testClusterID;nsid=1614123167;c=1689286574453), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:16:14,870 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9e88de176894ecd8: Processing first storage report for DS-1a9b692a-9f45-4308-8d10-a936611c74d6 from datanode 323e90a0-9f34-4753-97a7-5085f0ae2659 2023-07-13 22:16:14,870 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9e88de176894ecd8: from storage DS-1a9b692a-9f45-4308-8d10-a936611c74d6 node DatanodeRegistration(127.0.0.1:36163, datanodeUuid=323e90a0-9f34-4753-97a7-5085f0ae2659, infoPort=38301, infoSecurePort=0, ipcPort=34055, storageInfo=lv=-57;cid=testClusterID;nsid=1614123167;c=1689286574453), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:16:14,917 INFO [Listener at localhost/34055] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46527 2023-07-13 22:16:14,941 WARN [Listener at localhost/35819] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-13 22:16:14,961 WARN [Listener at localhost/35819] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 22:16:14,963 WARN [Listener at localhost/35819] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 22:16:14,964 INFO [Listener at localhost/35819] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 22:16:14,968 INFO [Listener at localhost/35819] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/java.io.tmpdir/Jetty_localhost_37167_datanode____eillth/webapp 2023-07-13 22:16:15,041 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8190019ae0781a3d: Processing first storage report for DS-24242fd8-358a-4187-a1f5-9a5588ed2305 from datanode c96fce37-487f-4275-8b94-56fd4fd426bc 2023-07-13 22:16:15,041 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8190019ae0781a3d: from storage DS-24242fd8-358a-4187-a1f5-9a5588ed2305 node DatanodeRegistration(127.0.0.1:44189, datanodeUuid=c96fce37-487f-4275-8b94-56fd4fd426bc, infoPort=38379, infoSecurePort=0, ipcPort=35819, storageInfo=lv=-57;cid=testClusterID;nsid=1614123167;c=1689286574453), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:16:15,041 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8190019ae0781a3d: Processing first storage report for DS-db7e4896-265e-42f1-9372-e9ad4397da42 from datanode c96fce37-487f-4275-8b94-56fd4fd426bc 2023-07-13 22:16:15,041 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8190019ae0781a3d: from storage DS-db7e4896-265e-42f1-9372-e9ad4397da42 node DatanodeRegistration(127.0.0.1:44189, datanodeUuid=c96fce37-487f-4275-8b94-56fd4fd426bc, infoPort=38379, infoSecurePort=0, ipcPort=35819, storageInfo=lv=-57;cid=testClusterID;nsid=1614123167;c=1689286574453), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 22:16:15,068 INFO [Listener at localhost/35819] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37167 2023-07-13 22:16:15,075 WARN [Listener at localhost/43483] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 22:16:15,189 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xadd21a0a3c0b8439: Processing first storage report for DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7 from datanode 2ab024b8-beb0-483e-8f8e-5f86e08a9f46 2023-07-13 22:16:15,189 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xadd21a0a3c0b8439: from storage DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7 node DatanodeRegistration(127.0.0.1:40463, datanodeUuid=2ab024b8-beb0-483e-8f8e-5f86e08a9f46, infoPort=43965, infoSecurePort=0, ipcPort=43483, storageInfo=lv=-57;cid=testClusterID;nsid=1614123167;c=1689286574453), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:16:15,189 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xadd21a0a3c0b8439: Processing first storage 
report for DS-b3b7d286-786a-42d6-af9d-c2f2b61c9d55 from datanode 2ab024b8-beb0-483e-8f8e-5f86e08a9f46 2023-07-13 22:16:15,189 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xadd21a0a3c0b8439: from storage DS-b3b7d286-786a-42d6-af9d-c2f2b61c9d55 node DatanodeRegistration(127.0.0.1:40463, datanodeUuid=2ab024b8-beb0-483e-8f8e-5f86e08a9f46, infoPort=43965, infoSecurePort=0, ipcPort=43483, storageInfo=lv=-57;cid=testClusterID;nsid=1614123167;c=1689286574453), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 22:16:15,285 DEBUG [Listener at localhost/43483] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0 2023-07-13 22:16:15,289 INFO [Listener at localhost/43483] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/zookeeper_0, clientPort=63373, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-13 22:16:15,290 INFO [Listener at localhost/43483] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63373 2023-07-13 22:16:15,291 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:15,291 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:15,306 INFO [Listener at localhost/43483] util.FSUtils(471): Created version file at hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18 with version=8 2023-07-13 22:16:15,306 INFO [Listener at localhost/43483] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42191/user/jenkins/test-data/a3835b7d-f33a-58ac-bbea-f495b712e77c/hbase-staging 2023-07-13 22:16:15,307 DEBUG [Listener at localhost/43483] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-13 22:16:15,307 DEBUG [Listener at localhost/43483] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-13 22:16:15,307 DEBUG [Listener at localhost/43483] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-13 22:16:15,307 DEBUG [Listener at localhost/43483] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-13 22:16:15,308 INFO [Listener at localhost/43483] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:16:15,308 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,308 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,308 INFO [Listener at localhost/43483] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:16:15,308 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,308 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:16:15,308 INFO [Listener at localhost/43483] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:16:15,309 INFO [Listener at localhost/43483] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45763 2023-07-13 22:16:15,309 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:15,310 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:15,311 INFO [Listener at localhost/43483] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45763 connecting to ZooKeeper ensemble=127.0.0.1:63373 2023-07-13 22:16:15,320 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:457630x0, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:16:15,321 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45763-0x10160c207bc0000 connected 2023-07-13 22:16:15,345 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:16:15,346 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:15,346 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:16:15,347 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45763 2023-07-13 22:16:15,350 DEBUG [Listener at localhost/43483] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45763 2023-07-13 22:16:15,350 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45763 2023-07-13 22:16:15,351 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45763 2023-07-13 22:16:15,351 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45763 2023-07-13 22:16:15,353 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:16:15,353 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:16:15,353 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:16:15,354 INFO [Listener at localhost/43483] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 22:16:15,354 INFO [Listener at localhost/43483] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:16:15,354 INFO [Listener at localhost/43483] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:16:15,354 INFO [Listener at localhost/43483] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
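The master just registered watchers on /hbase/master, /hbase/running and /hbase/acl against the quorum 127.0.0.1:63373. That is the same ensemble any client of this mini cluster would point at; a hedged fragment for orientation (the port value is simply the clientPort logged above, and inside a test one would normally take the Configuration straight from the testing utility rather than setting it by hand):

    // needs: org.apache.hadoop.conf.Configuration, org.apache.hadoop.hbase.HBaseConfiguration,
    //        org.apache.hadoop.hbase.client.{ConnectionFactory, Connection, Admin}
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 63373);  // clientPort from the MiniZooKeeperCluster line
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // the client locates the active master through the /hbase/master znode watched above
      admin.listTableDescriptors().forEach(td -> System.out.println(td.getTableName()));
    }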
2023-07-13 22:16:15,355 INFO [Listener at localhost/43483] http.HttpServer(1146): Jetty bound to port 33509 2023-07-13 22:16:15,355 INFO [Listener at localhost/43483] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:16:15,362 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,363 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b59d690{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:16:15,363 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,363 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@69e30e67{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:16:15,480 INFO [Listener at localhost/43483] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:16:15,482 INFO [Listener at localhost/43483] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:16:15,482 INFO [Listener at localhost/43483] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:16:15,483 INFO [Listener at localhost/43483] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 22:16:15,483 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,485 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@34d64535{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/java.io.tmpdir/jetty-0_0_0_0-33509-hbase-server-2_4_18-SNAPSHOT_jar-_-any-807106276303055793/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 22:16:15,486 INFO [Listener at localhost/43483] server.AbstractConnector(333): Started ServerConnector@74189b3b{HTTP/1.1, (http/1.1)}{0.0.0.0:33509} 2023-07-13 22:16:15,486 INFO [Listener at localhost/43483] server.Server(415): Started @42135ms 2023-07-13 22:16:15,486 INFO [Listener at localhost/43483] master.HMaster(444): hbase.rootdir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18, hbase.cluster.distributed=false 2023-07-13 22:16:15,499 INFO [Listener at localhost/43483] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:16:15,500 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,500 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,500 INFO 
[Listener at localhost/43483] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:16:15,500 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,500 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:16:15,500 INFO [Listener at localhost/43483] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:16:15,502 INFO [Listener at localhost/43483] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34769 2023-07-13 22:16:15,502 INFO [Listener at localhost/43483] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 22:16:15,503 DEBUG [Listener at localhost/43483] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 22:16:15,504 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:15,505 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:15,506 INFO [Listener at localhost/43483] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34769 connecting to ZooKeeper ensemble=127.0.0.1:63373 2023-07-13 22:16:15,510 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:347690x0, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:16:15,511 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34769-0x10160c207bc0001 connected 2023-07-13 22:16:15,511 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:16:15,512 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:15,512 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:16:15,514 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34769 2023-07-13 22:16:15,515 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34769 2023-07-13 22:16:15,515 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34769 2023-07-13 22:16:15,515 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34769 2023-07-13 22:16:15,515 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34769 2023-07-13 22:16:15,517 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:16:15,517 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:16:15,517 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:16:15,518 INFO [Listener at localhost/43483] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 22:16:15,518 INFO [Listener at localhost/43483] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:16:15,518 INFO [Listener at localhost/43483] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:16:15,518 INFO [Listener at localhost/43483] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 22:16:15,519 INFO [Listener at localhost/43483] http.HttpServer(1146): Jetty bound to port 44583 2023-07-13 22:16:15,519 INFO [Listener at localhost/43483] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:16:15,522 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,522 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@77aa7250{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:16:15,523 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,523 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2e0aae4b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:16:15,638 INFO [Listener at localhost/43483] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:16:15,639 INFO [Listener at localhost/43483] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:16:15,639 INFO [Listener at localhost/43483] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:16:15,639 INFO [Listener at localhost/43483] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 22:16:15,640 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,641 INFO 
[Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@138d4621{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/java.io.tmpdir/jetty-0_0_0_0-44583-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3280280849261129316/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:15,643 INFO [Listener at localhost/43483] server.AbstractConnector(333): Started ServerConnector@63684f13{HTTP/1.1, (http/1.1)}{0.0.0.0:44583} 2023-07-13 22:16:15,643 INFO [Listener at localhost/43483] server.Server(415): Started @42292ms 2023-07-13 22:16:15,655 INFO [Listener at localhost/43483] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:16:15,656 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,656 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,656 INFO [Listener at localhost/43483] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:16:15,656 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,656 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:16:15,656 INFO [Listener at localhost/43483] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:16:15,657 INFO [Listener at localhost/43483] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43005 2023-07-13 22:16:15,657 INFO [Listener at localhost/43483] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 22:16:15,658 DEBUG [Listener at localhost/43483] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 22:16:15,659 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:15,660 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:15,660 INFO [Listener at localhost/43483] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43005 connecting to ZooKeeper ensemble=127.0.0.1:63373 2023-07-13 22:16:15,664 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:430050x0, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 
22:16:15,665 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): regionserver:430050x0, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:16:15,666 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43005-0x10160c207bc0002 connected 2023-07-13 22:16:15,666 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:15,666 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:16:15,667 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43005 2023-07-13 22:16:15,667 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43005 2023-07-13 22:16:15,667 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43005 2023-07-13 22:16:15,668 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43005 2023-07-13 22:16:15,668 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43005 2023-07-13 22:16:15,670 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:16:15,670 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:16:15,670 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:16:15,670 INFO [Listener at localhost/43483] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 22:16:15,670 INFO [Listener at localhost/43483] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:16:15,670 INFO [Listener at localhost/43483] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:16:15,671 INFO [Listener at localhost/43483] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
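Each region server above allocates its BlockCache before registering with ZooKeeper ("Allocating BlockCache size=782.40 MB, blockSize=64 KB"). That figure is not configured directly; roughly, it is the hfile.block.cache.size fraction (default 0.4) of the usable max heap of this test JVM. A hedged back-of-the-envelope fragment, assuming util from the first sketch:

    // needs: org.apache.hadoop.conf.Configuration
    Configuration conf = util.getConfiguration();
    long maxHeapBytes = Runtime.getRuntime().maxMemory();            // roughly 1.9 GB for this JVM
    float fraction = conf.getFloat("hfile.block.cache.size", 0.4f);  // standard key, default 0.4
    long onHeapCacheBytes = (long) (maxHeapBytes * fraction);
    // 0.4 * ~1.9 GB is about 782 MB, matching the "Allocating BlockCache size=782.40 MB" lines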
2023-07-13 22:16:15,671 INFO [Listener at localhost/43483] http.HttpServer(1146): Jetty bound to port 37109 2023-07-13 22:16:15,671 INFO [Listener at localhost/43483] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:16:15,675 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,675 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@13211c10{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:16:15,676 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,676 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@60c861b8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:16:15,791 INFO [Listener at localhost/43483] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:16:15,792 INFO [Listener at localhost/43483] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:16:15,793 INFO [Listener at localhost/43483] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:16:15,793 INFO [Listener at localhost/43483] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 22:16:15,794 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,794 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@26105268{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/java.io.tmpdir/jetty-0_0_0_0-37109-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6771811211649071724/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:15,797 INFO [Listener at localhost/43483] server.AbstractConnector(333): Started ServerConnector@30e74a47{HTTP/1.1, (http/1.1)}{0.0.0.0:37109} 2023-07-13 22:16:15,797 INFO [Listener at localhost/43483] server.Server(415): Started @42446ms 2023-07-13 22:16:15,809 INFO [Listener at localhost/43483] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:16:15,809 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,809 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,809 INFO [Listener at localhost/43483] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:16:15,809 INFO 
[Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:15,809 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:16:15,810 INFO [Listener at localhost/43483] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:16:15,810 INFO [Listener at localhost/43483] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35629 2023-07-13 22:16:15,811 INFO [Listener at localhost/43483] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 22:16:15,812 DEBUG [Listener at localhost/43483] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 22:16:15,812 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:15,813 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:15,814 INFO [Listener at localhost/43483] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35629 connecting to ZooKeeper ensemble=127.0.0.1:63373 2023-07-13 22:16:15,819 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:356290x0, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:16:15,820 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): regionserver:356290x0, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:16:15,821 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): regionserver:356290x0, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:15,821 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35629-0x10160c207bc0003 connected 2023-07-13 22:16:15,821 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:16:15,822 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35629 2023-07-13 22:16:15,822 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35629 2023-07-13 22:16:15,822 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35629 2023-07-13 22:16:15,826 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35629 2023-07-13 22:16:15,826 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35629 
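With the third region server (RPC port 35629) registered, all three region server processes of this run exist in-JVM and are reachable directly through the mini-cluster handle once startup completes. A hedged fragment, with util as in the first sketch:

    // needs: org.apache.hadoop.hbase.MiniHBaseCluster, org.apache.hadoop.hbase.util.JVMClusterUtil,
    //        org.apache.hadoop.hbase.regionserver.HRegionServer
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    for (JVMClusterUtil.RegionServerThread t : cluster.getLiveRegionServerThreads()) {
      HRegionServer rs = t.getRegionServer();
      // prints one ServerName per live region server; the RPC ports 34769, 43005 and 35629
      // are the ones the NettyRpcServer lines above bound to
      System.out.println(rs.getServerName());
    }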
2023-07-13 22:16:15,828 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:16:15,828 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:16:15,828 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:16:15,828 INFO [Listener at localhost/43483] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 22:16:15,828 INFO [Listener at localhost/43483] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:16:15,829 INFO [Listener at localhost/43483] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:16:15,829 INFO [Listener at localhost/43483] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 22:16:15,829 INFO [Listener at localhost/43483] http.HttpServer(1146): Jetty bound to port 42821 2023-07-13 22:16:15,829 INFO [Listener at localhost/43483] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:16:15,833 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,833 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@44c46cf5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:16:15,833 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,833 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6bdfaecf{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:16:15,949 INFO [Listener at localhost/43483] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:16:15,950 INFO [Listener at localhost/43483] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:16:15,950 INFO [Listener at localhost/43483] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:16:15,950 INFO [Listener at localhost/43483] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 22:16:15,951 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:15,951 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3dc1dbf8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/java.io.tmpdir/jetty-0_0_0_0-42821-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5558483792263493888/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:15,953 INFO [Listener at localhost/43483] server.AbstractConnector(333): Started ServerConnector@7df6aaec{HTTP/1.1, (http/1.1)}{0.0.0.0:42821} 2023-07-13 22:16:15,953 INFO [Listener at localhost/43483] server.Server(415): Started @42602ms 2023-07-13 22:16:15,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:16:15,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@149a369b{HTTP/1.1, (http/1.1)}{0.0.0.0:35305} 2023-07-13 22:16:15,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @42608ms 2023-07-13 22:16:15,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,45763,1689286575307 2023-07-13 22:16:15,960 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 22:16:15,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,45763,1689286575307 2023-07-13 22:16:15,962 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:16:15,962 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:16:15,962 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:16:15,962 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 22:16:15,963 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:15,964 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 22:16:15,965 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,45763,1689286575307 from backup master directory 2023-07-13 22:16:15,966 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 22:16:15,967 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,45763,1689286575307 2023-07-13 22:16:15,967 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 22:16:15,967 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 22:16:15,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,45763,1689286575307 2023-07-13 22:16:15,981 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/hbase.id with ID: a1a653bf-f415-4879-aff0-f78987b4b2f9 2023-07-13 22:16:15,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:15,996 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:16,005 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x00f977bc to 127.0.0.1:63373 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:16,008 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79c05905, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:16,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:16,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 22:16:16,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:16:16,010 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/data/master/store-tmp 2023-07-13 22:16:16,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:16,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 22:16:16,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:16,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:16,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 22:16:16,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:16,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
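The becomeActiveMaster thread above walks the standard election path: it creates its entry under /hbase/backup-masters, finds no other /hbase/master owner, deletes the backup entry, registers itself as the active master, writes the cluster ID file hbase.id, and then builds the master:store local region that backs the procedure store. From the test side the outcome is observable through the Admin API; a hedged fragment, with util as before:

    // needs: org.apache.hadoop.hbase.{ClusterMetrics, ServerName}, org.apache.hadoop.hbase.client.Admin
    ServerName active = util.getMiniHBaseCluster().getMaster().getServerName();
    ClusterMetrics metrics = util.getAdmin().getClusterMetrics();
    // matches the /hbase/master owner registered above
    System.out.println("active master: " + metrics.getMasterName());
    // empty once the lone master has promoted itself out of /hbase/backup-masters
    System.out.println("backup masters: " + metrics.getBackupMasterNames());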
2023-07-13 22:16:16,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 22:16:16,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/WALs/jenkins-hbase4.apache.org,45763,1689286575307 2023-07-13 22:16:16,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45763%2C1689286575307, suffix=, logDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/WALs/jenkins-hbase4.apache.org,45763,1689286575307, archiveDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/oldWALs, maxLogs=10 2023-07-13 22:16:16,033 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK] 2023-07-13 22:16:16,034 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK] 2023-07-13 22:16:16,037 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK] 2023-07-13 22:16:16,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/WALs/jenkins-hbase4.apache.org,45763,1689286575307/jenkins-hbase4.apache.org%2C45763%2C1689286575307.1689286576020 2023-07-13 22:16:16,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK], DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK], DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK]] 2023-07-13 22:16:16,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:16,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:16,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:16,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:16,040 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:16,041 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 22:16:16,042 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 22:16:16,042 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:16,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:16,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:16,045 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 22:16:16,047 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:16,047 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9698086880, jitterRate=-0.09679527580738068}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:16,047 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 22:16:16,047 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 22:16:16,048 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 22:16:16,048 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 22:16:16,048 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-13 22:16:16,049 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-13 22:16:16,049 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-13 22:16:16,049 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 22:16:16,050 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-13 22:16:16,050 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-13 22:16:16,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-13 22:16:16,051 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 22:16:16,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 22:16:16,053 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:16,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 22:16:16,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 22:16:16,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 22:16:16,055 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:16,055 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:16,055 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-13 22:16:16,055 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:16,055 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:16,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,45763,1689286575307, sessionid=0x10160c207bc0000, setting cluster-up flag (Was=false) 2023-07-13 22:16:16,060 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:16,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 22:16:16,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45763,1689286575307 2023-07-13 22:16:16,068 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:16,072 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 22:16:16,073 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45763,1689286575307 2023-07-13 22:16:16,074 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.hbase-snapshot/.tmp 2023-07-13 22:16:16,074 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 22:16:16,074 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 22:16:16,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 22:16:16,075 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 22:16:16,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
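The two CoprocessorHost lines are the rsgroup wiring taking effect: RSGroupAdminEndpoint is the system coprocessor that exposes the RSGroupAdminService registered a few lines earlier, and TestRSGroupsBase$CPMasterObserver is the test's own observer appended to the same list. Configuration of that general shape looks like the hedged fragment below (the exact setup in TestRSGroupsBase may differ in detail; the two keys are the standard ones):

    // needs: org.apache.hadoop.conf.Configuration,
    //        org.apache.hadoop.hbase.rsgroup.{RSGroupAdminEndpoint, RSGroupBasedLoadBalancer}
    Configuration conf = util.getConfiguration();
    // master coprocessors are a comma-separated list; the test observer would be appended here
    conf.set("hbase.coprocessor.master.classes",
        RSGroupAdminEndpoint.class.getName());           // exposes RSGroupAdminService on the master
    conf.set("hbase.master.loadbalancer.class",
        RSGroupBasedLoadBalancer.class.getName());       // makes region balancing group-aware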
2023-07-13 22:16:16,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-13 22:16:16,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 22:16:16,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 22:16:16,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 22:16:16,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 22:16:16,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:16:16,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:16:16,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:16:16,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 22:16:16,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-13 22:16:16,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:16:16,089 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689286606091 2023-07-13 22:16:16,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 22:16:16,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 22:16:16,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 22:16:16,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 22:16:16,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 22:16:16,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 22:16:16,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 22:16:16,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 22:16:16,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 22:16:16,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 22:16:16,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 22:16:16,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689286576093,5,FailOnTimeoutGroup] 2023-07-13 22:16:16,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689286576093,5,FailOnTimeoutGroup] 2023-07-13 22:16:16,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-13 22:16:16,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
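The cleaner block above lists the log and HFile cleaner plugins the master loads by default (the TTL cleaners, the replication log cleaner, the snapshot HFile cleaner and the HFileLink cleaner) and schedules both chores on a 600000 ms period. Those plugin lists come from two standard comma-separated configuration keys, so an extra cleaner is added by appending to them; a hedged fragment, with util as before and an illustrative class name only:

    // needs: org.apache.hadoop.conf.Configuration
    Configuration conf = util.getConfiguration();
    String logCleaners = conf.get("hbase.master.logcleaner.plugins");      // list behind log_cleaner above
    String hfileCleaners = conf.get("hbase.master.hfilecleaner.plugins");  // list behind hfile_cleaner above
    // appending a hypothetical custom HFile cleaner before the master starts
    conf.set("hbase.master.hfilecleaner.plugins",
        hfileCleaners + ",com.example.MyHFileCleaner");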
2023-07-13 22:16:16,096 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 22:16:16,096 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-13 22:16:16,097 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:16,116 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:16,117 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:16,117 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18 2023-07-13 22:16:16,135 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:16,137 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 22:16:16,138 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/info 2023-07-13 22:16:16,139 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 22:16:16,140 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:16,140 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 22:16:16,141 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/rep_barrier 2023-07-13 22:16:16,141 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 22:16:16,142 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:16,142 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 22:16:16,143 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/table 2023-07-13 22:16:16,144 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): 
size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 22:16:16,144 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:16,145 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740 2023-07-13 22:16:16,145 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740 2023-07-13 22:16:16,148 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-13 22:16:16,149 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 22:16:16,151 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:16,153 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10790370080, jitterRate=0.004931524395942688}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 22:16:16,154 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 22:16:16,160 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 22:16:16,160 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 22:16:16,160 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 22:16:16,161 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 22:16:16,161 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 22:16:16,161 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 22:16:16,161 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 22:16:16,162 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 22:16:16,162 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-13 22:16:16,162 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 22:16:16,164 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 22:16:16,165 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-13 22:16:16,167 INFO [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(951): ClusterId : a1a653bf-f415-4879-aff0-f78987b4b2f9 2023-07-13 22:16:16,167 DEBUG [RS:1;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 22:16:16,167 INFO [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(951): ClusterId : a1a653bf-f415-4879-aff0-f78987b4b2f9 2023-07-13 22:16:16,167 INFO [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(951): ClusterId : a1a653bf-f415-4879-aff0-f78987b4b2f9 2023-07-13 22:16:16,167 DEBUG [RS:0;jenkins-hbase4:34769] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 22:16:16,167 DEBUG [RS:2;jenkins-hbase4:35629] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 22:16:16,169 DEBUG [RS:1;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 22:16:16,169 DEBUG [RS:1;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 22:16:16,170 DEBUG [RS:2;jenkins-hbase4:35629] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 22:16:16,170 DEBUG [RS:2;jenkins-hbase4:35629] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 22:16:16,172 DEBUG [RS:1;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 22:16:16,174 DEBUG [RS:1;jenkins-hbase4:43005] zookeeper.ReadOnlyZKClient(139): Connect 0x5bbc9bb4 to 127.0.0.1:63373 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:16,174 DEBUG [RS:0;jenkins-hbase4:34769] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 22:16:16,174 DEBUG [RS:0;jenkins-hbase4:34769] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 22:16:16,174 DEBUG [RS:2;jenkins-hbase4:35629] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 22:16:16,177 DEBUG [RS:2;jenkins-hbase4:35629] zookeeper.ReadOnlyZKClient(139): Connect 0x6e46f61a to 127.0.0.1:63373 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:16,177 DEBUG [RS:0;jenkins-hbase4:34769] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 22:16:16,178 DEBUG [RS:0;jenkins-hbase4:34769] zookeeper.ReadOnlyZKClient(139): Connect 0x22ca3124 to 127.0.0.1:63373 with session timeout=90000ms, retries 30, retry interval 
1000ms, keepAlive=60000ms 2023-07-13 22:16:16,191 DEBUG [RS:2;jenkins-hbase4:35629] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f08d292, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:16,191 DEBUG [RS:2;jenkins-hbase4:35629] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5f62b9a6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:16:16,192 DEBUG [RS:0;jenkins-hbase4:34769] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c0ddb0d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:16,193 DEBUG [RS:0;jenkins-hbase4:34769] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28325a54, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:16:16,195 DEBUG [RS:1;jenkins-hbase4:43005] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@557b7eb3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:16,195 DEBUG [RS:1;jenkins-hbase4:43005] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5144f0c4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:16:16,203 DEBUG [RS:0;jenkins-hbase4:34769] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:34769 2023-07-13 22:16:16,203 INFO [RS:0;jenkins-hbase4:34769] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 22:16:16,203 INFO [RS:0;jenkins-hbase4:34769] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 22:16:16,203 DEBUG [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 22:16:16,207 DEBUG [RS:1;jenkins-hbase4:43005] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:43005 2023-07-13 22:16:16,207 INFO [RS:1;jenkins-hbase4:43005] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 22:16:16,207 INFO [RS:1;jenkins-hbase4:43005] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 22:16:16,207 DEBUG [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1022): About to register with Master. 
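A few entries back (22:16:16,097) InitMetaProcedure logged the hbase:meta descriptor it wrote out: families info, rep_barrier and table, all IN_MEMORY with no bloom filter, plus the MultiRowMutationEndpoint coprocessor. The same shape can be approximated with the public descriptor-builder API; this is a hedged sketch mirroring only the 'info' family, not the code path InitMetaProcedure itself uses, and the table name "demo:meta_like" is an invented placeholder.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaLikeDescriptorSketch {
      public static void main(String[] args) throws Exception {
        // Mirrors the logged 'info' family: in-memory, 3 versions, 8 KB blocks, no bloom filter.
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo", "meta_like"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("info"))
                .setInMemory(true)
                .setMaxVersions(3)
                .setBlocksize(8192)
                .setBloomFilterType(BloomType.NONE)
                .build())
            // Same coprocessor class the logged hbase:meta descriptor declares.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .build();
        System.out.println(desc);
      }
    }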
2023-07-13 22:16:16,208 INFO [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45763,1689286575307 with isa=jenkins-hbase4.apache.org/172.31.14.131:43005, startcode=1689286575655 2023-07-13 22:16:16,208 DEBUG [RS:1;jenkins-hbase4:43005] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 22:16:16,210 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47609, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 22:16:16,212 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45763] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:16,212 INFO [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45763,1689286575307 with isa=jenkins-hbase4.apache.org/172.31.14.131:34769, startcode=1689286575499 2023-07-13 22:16:16,212 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 22:16:16,212 DEBUG [RS:0;jenkins-hbase4:34769] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 22:16:16,213 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 22:16:16,213 DEBUG [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18 2023-07-13 22:16:16,213 DEBUG [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40407 2023-07-13 22:16:16,213 DEBUG [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33509 2023-07-13 22:16:16,214 DEBUG [RS:2;jenkins-hbase4:35629] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:35629 2023-07-13 22:16:16,214 INFO [RS:2;jenkins-hbase4:35629] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 22:16:16,214 INFO [RS:2;jenkins-hbase4:35629] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 22:16:16,214 DEBUG [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-13 22:16:16,215 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:16,215 INFO [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45763,1689286575307 with isa=jenkins-hbase4.apache.org/172.31.14.131:35629, startcode=1689286575808 2023-07-13 22:16:16,215 DEBUG [RS:2;jenkins-hbase4:35629] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 22:16:16,215 DEBUG [RS:1;jenkins-hbase4:43005] zookeeper.ZKUtil(162): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:16,215 WARN [RS:1;jenkins-hbase4:43005] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 22:16:16,216 INFO [RS:1;jenkins-hbase4:43005] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:16:16,216 DEBUG [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:16,219 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55055, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 22:16:16,219 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57015, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 22:16:16,219 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45763] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:16,219 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 22:16:16,219 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 22:16:16,219 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45763] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:16,219 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 22:16:16,219 DEBUG [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18 2023-07-13 22:16:16,219 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 22:16:16,219 DEBUG [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40407 2023-07-13 22:16:16,220 DEBUG [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33509 2023-07-13 22:16:16,220 DEBUG [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18 2023-07-13 22:16:16,220 DEBUG [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40407 2023-07-13 22:16:16,220 DEBUG [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33509 2023-07-13 22:16:16,228 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43005,1689286575655] 2023-07-13 22:16:16,229 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35629,1689286575808] 2023-07-13 22:16:16,229 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34769,1689286575499] 2023-07-13 22:16:16,231 DEBUG [RS:2;jenkins-hbase4:35629] zookeeper.ZKUtil(162): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:16,231 WARN [RS:2;jenkins-hbase4:35629] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 22:16:16,231 INFO [RS:2;jenkins-hbase4:35629] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:16:16,232 DEBUG [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:16,232 DEBUG [RS:0;jenkins-hbase4:34769] zookeeper.ZKUtil(162): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:16,232 DEBUG [RS:1;jenkins-hbase4:43005] zookeeper.ZKUtil(162): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:16,232 WARN [RS:0;jenkins-hbase4:34769] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
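The RegionServerTracker entries above show each region server registering an ephemeral znode under /hbase/rs on the test quorum at 127.0.0.1:63373. A minimal sketch of listing those children with the plain ZooKeeper client follows; the quorum address, session timeout and znode path are taken from the log, while the class name is illustrative and this is not the HBase-internal ZKWatcher code path.

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class ListRegionServerZNodesSketch {
      public static void main(String[] args) throws Exception {
        // Quorum and 90s session timeout as logged for this mini-cluster.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:63373", 90000, event -> { });
        try {
          // Each live region server holds an ephemeral child here,
          // e.g. jenkins-hbase4.apache.org,43005,1689286575655.
          List<String> servers = zk.getChildren("/hbase/rs", false);
          servers.forEach(System.out::println);
        } finally {
          zk.close();
        }
      }
    }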
2023-07-13 22:16:16,233 INFO [RS:0;jenkins-hbase4:34769] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:16:16,236 DEBUG [RS:1;jenkins-hbase4:43005] zookeeper.ZKUtil(162): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:16,236 DEBUG [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:16,237 DEBUG [RS:1;jenkins-hbase4:43005] zookeeper.ZKUtil(162): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:16,242 DEBUG [RS:2;jenkins-hbase4:35629] zookeeper.ZKUtil(162): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:16,245 DEBUG [RS:2;jenkins-hbase4:35629] zookeeper.ZKUtil(162): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:16,245 DEBUG [RS:1;jenkins-hbase4:43005] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 22:16:16,245 INFO [RS:1;jenkins-hbase4:43005] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 22:16:16,250 DEBUG [RS:2;jenkins-hbase4:35629] zookeeper.ZKUtil(162): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:16,250 DEBUG [RS:0;jenkins-hbase4:34769] zookeeper.ZKUtil(162): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:16,250 DEBUG [RS:0;jenkins-hbase4:34769] zookeeper.ZKUtil(162): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:16,251 INFO [RS:1;jenkins-hbase4:43005] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 22:16:16,251 DEBUG [RS:2;jenkins-hbase4:35629] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 22:16:16,251 DEBUG [RS:0;jenkins-hbase4:34769] zookeeper.ZKUtil(162): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:16,251 INFO [RS:2;jenkins-hbase4:35629] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 22:16:16,252 DEBUG [RS:0;jenkins-hbase4:34769] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 22:16:16,252 INFO [RS:0;jenkins-hbase4:34769] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 22:16:16,255 INFO [RS:1;jenkins-hbase4:43005] throttle.PressureAwareCompactionThroughputController(131): Compaction 
throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 22:16:16,255 INFO [RS:1;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,258 INFO [RS:0;jenkins-hbase4:34769] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 22:16:16,261 INFO [RS:2;jenkins-hbase4:35629] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 22:16:16,262 INFO [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 22:16:16,263 INFO [RS:0;jenkins-hbase4:34769] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 22:16:16,263 INFO [RS:0;jenkins-hbase4:34769] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,271 INFO [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 22:16:16,271 INFO [RS:2;jenkins-hbase4:35629] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 22:16:16,271 INFO [RS:2;jenkins-hbase4:35629] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,272 INFO [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 22:16:16,272 INFO [RS:1;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,273 DEBUG [RS:1;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,273 DEBUG [RS:1;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,273 DEBUG [RS:1;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,274 DEBUG [RS:1;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,274 INFO [RS:0;jenkins-hbase4:34769] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,275 INFO [RS:2;jenkins-hbase4:35629] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
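The PressureAwareCompactionThroughputController entries above report a 100 MB/s upper bound, 50 MB/s lower bound and a 60000 ms tuning period. Those bounds are driven by configuration; in the sketch below the property names are my assumption about the controller's keys (the log does not state them), and the values simply restate the logged defaults.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionThroughputConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed keys for the pressure-aware controller's bounds; values match the logged
        // defaults (100 MB/s higher bound, 50 MB/s lower bound, 60000 ms tune period).
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        conf.setInt("hbase.hstore.compaction.throughput.tune.period", 60000);
        System.out.println(conf.get("hbase.hstore.compaction.throughput.higher.bound"));
      }
    }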
2023-07-13 22:16:16,275 DEBUG [RS:2;jenkins-hbase4:35629] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,275 DEBUG [RS:2;jenkins-hbase4:35629] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,275 DEBUG [RS:2;jenkins-hbase4:35629] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:2;jenkins-hbase4:35629] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:2;jenkins-hbase4:35629] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,275 DEBUG [RS:1;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:2;jenkins-hbase4:35629] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:16:16,275 DEBUG [RS:0;jenkins-hbase4:34769] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:2;jenkins-hbase4:35629] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:1;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:16:16,276 DEBUG [RS:2;jenkins-hbase4:35629] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:0;jenkins-hbase4:34769] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:2;jenkins-hbase4:35629] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:1;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:2;jenkins-hbase4:35629] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:0;jenkins-hbase4:34769] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:1;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:0;jenkins-hbase4:34769] executor.ExecutorService(93): Starting executor service 
name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:1;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:0;jenkins-hbase4:34769] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:1;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,276 DEBUG [RS:0;jenkins-hbase4:34769] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:16:16,277 DEBUG [RS:0;jenkins-hbase4:34769] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,277 DEBUG [RS:0;jenkins-hbase4:34769] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,277 DEBUG [RS:0;jenkins-hbase4:34769] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,277 DEBUG [RS:0;jenkins-hbase4:34769] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:16,308 INFO [RS:0;jenkins-hbase4:34769] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,308 INFO [RS:1;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,311 INFO [RS:1;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,311 INFO [RS:1;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,312 INFO [RS:2;jenkins-hbase4:35629] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,312 INFO [RS:0;jenkins-hbase4:34769] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,312 INFO [RS:2;jenkins-hbase4:35629] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,312 INFO [RS:0;jenkins-hbase4:34769] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,312 INFO [RS:2;jenkins-hbase4:35629] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-13 22:16:16,316 DEBUG [jenkins-hbase4:45763] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 22:16:16,316 DEBUG [jenkins-hbase4:45763] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:16,316 DEBUG [jenkins-hbase4:45763] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:16,316 DEBUG [jenkins-hbase4:45763] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:16,316 DEBUG [jenkins-hbase4:45763] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:16:16,316 DEBUG [jenkins-hbase4:45763] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:16,319 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43005,1689286575655, state=OPENING 2023-07-13 22:16:16,321 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-13 22:16:16,324 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:16,324 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 22:16:16,324 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43005,1689286575655}] 2023-07-13 22:16:16,331 INFO [RS:1;jenkins-hbase4:43005] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 22:16:16,331 INFO [RS:1;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43005,1689286575655-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,334 INFO [RS:0;jenkins-hbase4:34769] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 22:16:16,334 INFO [RS:0;jenkins-hbase4:34769] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34769,1689286575499-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,341 INFO [RS:2;jenkins-hbase4:35629] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 22:16:16,341 INFO [RS:2;jenkins-hbase4:35629] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35629,1689286575808-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 22:16:16,358 INFO [RS:0;jenkins-hbase4:34769] regionserver.Replication(203): jenkins-hbase4.apache.org,34769,1689286575499 started 2023-07-13 22:16:16,359 INFO [RS:1;jenkins-hbase4:43005] regionserver.Replication(203): jenkins-hbase4.apache.org,43005,1689286575655 started 2023-07-13 22:16:16,359 INFO [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34769,1689286575499, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34769, sessionid=0x10160c207bc0001 2023-07-13 22:16:16,359 INFO [RS:2;jenkins-hbase4:35629] regionserver.Replication(203): jenkins-hbase4.apache.org,35629,1689286575808 started 2023-07-13 22:16:16,359 DEBUG [RS:0;jenkins-hbase4:34769] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 22:16:16,359 DEBUG [RS:0;jenkins-hbase4:34769] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:16,359 INFO [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43005,1689286575655, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43005, sessionid=0x10160c207bc0002 2023-07-13 22:16:16,359 DEBUG [RS:0;jenkins-hbase4:34769] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34769,1689286575499' 2023-07-13 22:16:16,359 DEBUG [RS:0;jenkins-hbase4:34769] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 22:16:16,359 INFO [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35629,1689286575808, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35629, sessionid=0x10160c207bc0003 2023-07-13 22:16:16,359 DEBUG [RS:1;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 22:16:16,359 DEBUG [RS:1;jenkins-hbase4:43005] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:16,359 DEBUG [RS:1;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43005,1689286575655' 2023-07-13 22:16:16,359 DEBUG [RS:1;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 22:16:16,359 DEBUG [RS:2;jenkins-hbase4:35629] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 22:16:16,359 DEBUG [RS:2;jenkins-hbase4:35629] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:16,359 DEBUG [RS:2;jenkins-hbase4:35629] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35629,1689286575808' 2023-07-13 22:16:16,359 DEBUG [RS:2;jenkins-hbase4:35629] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 22:16:16,359 DEBUG [RS:0;jenkins-hbase4:34769] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 22:16:16,359 DEBUG [RS:1;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 22:16:16,360 DEBUG 
[RS:2;jenkins-hbase4:35629] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 22:16:16,360 DEBUG [RS:0;jenkins-hbase4:34769] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 22:16:16,360 DEBUG [RS:0;jenkins-hbase4:34769] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 22:16:16,360 DEBUG [RS:0;jenkins-hbase4:34769] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:16,360 DEBUG [RS:1;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 22:16:16,360 DEBUG [RS:1;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 22:16:16,360 DEBUG [RS:0;jenkins-hbase4:34769] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34769,1689286575499' 2023-07-13 22:16:16,360 DEBUG [RS:0;jenkins-hbase4:34769] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 22:16:16,360 DEBUG [RS:2;jenkins-hbase4:35629] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 22:16:16,360 DEBUG [RS:2;jenkins-hbase4:35629] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 22:16:16,360 DEBUG [RS:2;jenkins-hbase4:35629] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:16,360 DEBUG [RS:2;jenkins-hbase4:35629] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35629,1689286575808' 2023-07-13 22:16:16,360 DEBUG [RS:2;jenkins-hbase4:35629] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 22:16:16,360 DEBUG [RS:1;jenkins-hbase4:43005] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:16,360 DEBUG [RS:1;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43005,1689286575655' 2023-07-13 22:16:16,360 DEBUG [RS:1;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 22:16:16,360 DEBUG [RS:0;jenkins-hbase4:34769] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 22:16:16,360 DEBUG [RS:2;jenkins-hbase4:35629] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 22:16:16,361 DEBUG [RS:0;jenkins-hbase4:34769] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 22:16:16,361 DEBUG [RS:1;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 22:16:16,361 INFO [RS:0;jenkins-hbase4:34769] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 22:16:16,361 INFO [RS:0;jenkins-hbase4:34769] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
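At this point all three region servers are registered and their procedure members are started, while hbase:meta is still being assigned; the ZKConnectionRegistry warning and NotServingRegionException a few entries further down show a client lookup retrying until the region opens. A minimal sketch of such a lookup with the public client API follows, assuming the HBase client jars on the classpath; the class name is illustrative and this is not the code path that produced the warning below.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class LocateMetaSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Point the client at this test cluster's quorum (127.0.0.1:63373, from the log).
        conf.set(HConstants.ZOOKEEPER_QUORUM, "127.0.0.1");
        conf.set(HConstants.ZOOKEEPER_CLIENT_PORT, "63373");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             RegionLocator locator = connection.getRegionLocator(TableName.META_TABLE_NAME)) {
          // Resolves the single hbase:meta region; the client retries while it is still OPENING.
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }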
2023-07-13 22:16:16,361 DEBUG [RS:2;jenkins-hbase4:35629] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 22:16:16,361 INFO [RS:2;jenkins-hbase4:35629] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 22:16:16,361 INFO [RS:2;jenkins-hbase4:35629] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-13 22:16:16,361 DEBUG [RS:1;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 22:16:16,361 INFO [RS:1;jenkins-hbase4:43005] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 22:16:16,361 INFO [RS:1;jenkins-hbase4:43005] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-13 22:16:16,384 WARN [ReadOnlyZKClient-127.0.0.1:63373@0x00f977bc] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-13 22:16:16,384 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45763,1689286575307] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:16:16,385 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34684, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:16:16,386 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43005] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:34684 deadline: 1689286636386, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:16,463 INFO [RS:1;jenkins-hbase4:43005] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43005%2C1689286575655, suffix=, logDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,43005,1689286575655, archiveDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/oldWALs, maxLogs=32 2023-07-13 22:16:16,463 INFO [RS:2;jenkins-hbase4:35629] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35629%2C1689286575808, suffix=, logDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,35629,1689286575808, archiveDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/oldWALs, maxLogs=32 2023-07-13 22:16:16,463 INFO [RS:0;jenkins-hbase4:34769] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34769%2C1689286575499, suffix=, logDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,34769,1689286575499, archiveDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/oldWALs, maxLogs=32 2023-07-13 22:16:16,478 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK] 2023-07-13 22:16:16,480 DEBUG [RS-EventLoopGroup-15-2] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK] 2023-07-13 22:16:16,480 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:16,480 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK] 2023-07-13 22:16:16,482 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:16:16,485 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34686, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:16:16,486 INFO [RS:2;jenkins-hbase4:35629] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,35629,1689286575808/jenkins-hbase4.apache.org%2C35629%2C1689286575808.1689286576463 2023-07-13 22:16:16,487 DEBUG [RS:2;jenkins-hbase4:35629] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK], DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK], DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK]] 2023-07-13 22:16:16,495 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK] 2023-07-13 22:16:16,495 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK] 2023-07-13 22:16:16,495 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK] 2023-07-13 22:16:16,496 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK] 2023-07-13 22:16:16,496 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK] 2023-07-13 22:16:16,496 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK] 2023-07-13 22:16:16,496 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 22:16:16,496 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:16:16,499 INFO [RS:0;jenkins-hbase4:34769] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,34769,1689286575499/jenkins-hbase4.apache.org%2C34769%2C1689286575499.1689286576464 2023-07-13 22:16:16,499 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43005%2C1689286575655.meta, suffix=.meta, logDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,43005,1689286575655, archiveDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/oldWALs, maxLogs=32 2023-07-13 22:16:16,500 INFO [RS:1;jenkins-hbase4:43005] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,43005,1689286575655/jenkins-hbase4.apache.org%2C43005%2C1689286575655.1689286576463 2023-07-13 22:16:16,502 DEBUG [RS:0;jenkins-hbase4:34769] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK], DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK], DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK]] 2023-07-13 22:16:16,504 DEBUG [RS:1;jenkins-hbase4:43005] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK], DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK], DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK]] 2023-07-13 22:16:16,516 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK] 2023-07-13 22:16:16,516 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK] 2023-07-13 22:16:16,517 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK] 2023-07-13 22:16:16,520 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,43005,1689286575655/jenkins-hbase4.apache.org%2C43005%2C1689286575655.meta.1689286576500.meta 2023-07-13 22:16:16,522 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK], 
DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK], DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK]] 2023-07-13 22:16:16,522 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:16,522 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 22:16:16,522 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 22:16:16,522 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-13 22:16:16,522 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 22:16:16,523 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:16,523 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 22:16:16,523 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 22:16:16,524 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 22:16:16,525 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/info 2023-07-13 22:16:16,525 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/info 2023-07-13 22:16:16,526 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 22:16:16,526 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 
22:16:16,526 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 22:16:16,527 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/rep_barrier 2023-07-13 22:16:16,527 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/rep_barrier 2023-07-13 22:16:16,528 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 22:16:16,528 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:16,528 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 22:16:16,529 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/table 2023-07-13 22:16:16,529 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/table 2023-07-13 22:16:16,529 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 22:16:16,530 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-13 22:16:16,531 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740 2023-07-13 22:16:16,532 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740 2023-07-13 22:16:16,533 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-13 22:16:16,535 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 22:16:16,536 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11513731360, jitterRate=0.07229979336261749}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 22:16:16,536 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 22:16:16,536 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689286576480 2023-07-13 22:16:16,540 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 22:16:16,541 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 22:16:16,541 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43005,1689286575655, state=OPEN 2023-07-13 22:16:16,542 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 22:16:16,542 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 22:16:16,544 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-13 22:16:16,544 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43005,1689286575655 in 218 msec 2023-07-13 22:16:16,545 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-13 22:16:16,545 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 382 msec 2023-07-13 22:16:16,546 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 470 msec 2023-07-13 22:16:16,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report 
in: status=null, state=RUNNING, startTime=1689286576547, completionTime=-1 2023-07-13 22:16:16,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-13 22:16:16,547 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-13 22:16:16,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 22:16:16,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689286636552 2023-07-13 22:16:16,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689286696552 2023-07-13 22:16:16,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-13 22:16:16,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45763,1689286575307-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45763,1689286575307-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45763,1689286575307-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:45763, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:16,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-13 22:16:16,558 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:16,558 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-13 22:16:16,558 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-13 22:16:16,559 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:16,560 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:16:16,561 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31 2023-07-13 22:16:16,562 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31 empty. 2023-07-13 22:16:16,562 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31 2023-07-13 22:16:16,562 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-13 22:16:16,575 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:16,576 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e82ef2be6abd0b532d876efa9e2a9c31, NAME => 'hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp 2023-07-13 22:16:16,585 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:16,585 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e82ef2be6abd0b532d876efa9e2a9c31, disabling compactions & flushes 2023-07-13 22:16:16,585 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 
2023-07-13 22:16:16,585 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 2023-07-13 22:16:16,585 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. after waiting 0 ms 2023-07-13 22:16:16,585 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 2023-07-13 22:16:16,585 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 2023-07-13 22:16:16,585 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e82ef2be6abd0b532d876efa9e2a9c31: 2023-07-13 22:16:16,587 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:16:16,588 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286576588"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286576588"}]},"ts":"1689286576588"} 2023-07-13 22:16:16,590 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 22:16:16,591 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:16:16,591 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286576591"}]},"ts":"1689286576591"} 2023-07-13 22:16:16,592 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-13 22:16:16,595 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:16,596 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:16,596 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:16,596 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:16:16,596 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:16,596 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e82ef2be6abd0b532d876efa9e2a9c31, ASSIGN}] 2023-07-13 22:16:16,597 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e82ef2be6abd0b532d876efa9e2a9c31, ASSIGN 2023-07-13 22:16:16,598 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e82ef2be6abd0b532d876efa9e2a9c31, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35629,1689286575808; forceNewPlan=false, retain=false 2023-07-13 22:16:16,685 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-13 22:16:16,690 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45763,1689286575307] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:16,692 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45763,1689286575307] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-13 22:16:16,694 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:16,694 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:16:16,696 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34 2023-07-13 22:16:16,697 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34 empty. 
2023-07-13 22:16:16,697 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34 2023-07-13 22:16:16,697 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-13 22:16:16,714 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:16,715 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2fdbc50ddfe83bbee146394d8c1f3c34, NAME => 'hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp 2023-07-13 22:16:16,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:16,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 2fdbc50ddfe83bbee146394d8c1f3c34, disabling compactions & flushes 2023-07-13 22:16:16,740 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:16,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:16,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. after waiting 0 ms 2023-07-13 22:16:16,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:16,740 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:16,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 2fdbc50ddfe83bbee146394d8c1f3c34: 2023-07-13 22:16:16,748 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:16:16,748 INFO [jenkins-hbase4:45763] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 22:16:16,749 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286576749"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286576749"}]},"ts":"1689286576749"} 2023-07-13 22:16:16,751 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e82ef2be6abd0b532d876efa9e2a9c31, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:16,751 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286576750"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286576750"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286576750"}]},"ts":"1689286576750"} 2023-07-13 22:16:16,751 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 22:16:16,752 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure e82ef2be6abd0b532d876efa9e2a9c31, server=jenkins-hbase4.apache.org,35629,1689286575808}] 2023-07-13 22:16:16,754 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:16:16,754 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286576754"}]},"ts":"1689286576754"} 2023-07-13 22:16:16,755 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-13 22:16:16,759 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:16,759 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:16,759 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:16,759 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:16:16,759 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:16,759 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2fdbc50ddfe83bbee146394d8c1f3c34, ASSIGN}] 2023-07-13 22:16:16,760 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2fdbc50ddfe83bbee146394d8c1f3c34, ASSIGN 2023-07-13 22:16:16,764 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=2fdbc50ddfe83bbee146394d8c1f3c34, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43005,1689286575655; forceNewPlan=false, retain=false 2023-07-13 22:16:16,906 DEBUG [RSProcedureDispatcher-pool-1] 
master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:16,906 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 22:16:16,908 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58648, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 22:16:16,911 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 2023-07-13 22:16:16,911 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e82ef2be6abd0b532d876efa9e2a9c31, NAME => 'hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:16,911 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e82ef2be6abd0b532d876efa9e2a9c31 2023-07-13 22:16:16,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:16,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e82ef2be6abd0b532d876efa9e2a9c31 2023-07-13 22:16:16,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e82ef2be6abd0b532d876efa9e2a9c31 2023-07-13 22:16:16,913 INFO [StoreOpener-e82ef2be6abd0b532d876efa9e2a9c31-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e82ef2be6abd0b532d876efa9e2a9c31 2023-07-13 22:16:16,914 DEBUG [StoreOpener-e82ef2be6abd0b532d876efa9e2a9c31-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31/info 2023-07-13 22:16:16,914 DEBUG [StoreOpener-e82ef2be6abd0b532d876efa9e2a9c31-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31/info 2023-07-13 22:16:16,915 INFO [jenkins-hbase4:45763] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 22:16:16,916 INFO [StoreOpener-e82ef2be6abd0b532d876efa9e2a9c31-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e82ef2be6abd0b532d876efa9e2a9c31 columnFamilyName info 2023-07-13 22:16:16,916 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=2fdbc50ddfe83bbee146394d8c1f3c34, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:16,916 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286576916"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286576916"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286576916"}]},"ts":"1689286576916"} 2023-07-13 22:16:16,916 INFO [StoreOpener-e82ef2be6abd0b532d876efa9e2a9c31-1] regionserver.HStore(310): Store=e82ef2be6abd0b532d876efa9e2a9c31/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:16,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31 2023-07-13 22:16:16,917 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 2fdbc50ddfe83bbee146394d8c1f3c34, server=jenkins-hbase4.apache.org,43005,1689286575655}] 2023-07-13 22:16:16,918 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31 2023-07-13 22:16:16,920 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e82ef2be6abd0b532d876efa9e2a9c31 2023-07-13 22:16:16,922 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:16,923 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e82ef2be6abd0b532d876efa9e2a9c31; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10686352640, jitterRate=-0.004755854606628418}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:16,923 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): 
Region open journal for e82ef2be6abd0b532d876efa9e2a9c31: 2023-07-13 22:16:16,923 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31., pid=7, masterSystemTime=1689286576906 2023-07-13 22:16:16,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 2023-07-13 22:16:16,927 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 2023-07-13 22:16:16,928 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e82ef2be6abd0b532d876efa9e2a9c31, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:16,928 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689286576927"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286576927"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286576927"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286576927"}]},"ts":"1689286576927"} 2023-07-13 22:16:16,930 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-13 22:16:16,930 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure e82ef2be6abd0b532d876efa9e2a9c31, server=jenkins-hbase4.apache.org,35629,1689286575808 in 177 msec 2023-07-13 22:16:16,931 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-13 22:16:16,931 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e82ef2be6abd0b532d876efa9e2a9c31, ASSIGN in 334 msec 2023-07-13 22:16:16,932 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:16:16,932 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286576932"}]},"ts":"1689286576932"} 2023-07-13 22:16:16,933 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-13 22:16:16,936 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:16:16,937 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 378 msec 2023-07-13 22:16:16,960 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-13 22:16:16,961 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): 
master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:16:16,961 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:16,964 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:16:16,965 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58656, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:16:16,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-13 22:16:16,975 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:16:16,977 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-13 22:16:16,979 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 22:16:16,979 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-13 22:16:16,979 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 22:16:17,073 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:17,073 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2fdbc50ddfe83bbee146394d8c1f3c34, NAME => 'hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:17,073 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 22:16:17,073 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. service=MultiRowMutationService 2023-07-13 22:16:17,073 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-13 22:16:17,073 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 2fdbc50ddfe83bbee146394d8c1f3c34 2023-07-13 22:16:17,073 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:17,073 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2fdbc50ddfe83bbee146394d8c1f3c34 2023-07-13 22:16:17,073 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2fdbc50ddfe83bbee146394d8c1f3c34 2023-07-13 22:16:17,075 INFO [StoreOpener-2fdbc50ddfe83bbee146394d8c1f3c34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 2fdbc50ddfe83bbee146394d8c1f3c34 2023-07-13 22:16:17,076 DEBUG [StoreOpener-2fdbc50ddfe83bbee146394d8c1f3c34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34/m 2023-07-13 22:16:17,076 DEBUG [StoreOpener-2fdbc50ddfe83bbee146394d8c1f3c34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34/m 2023-07-13 22:16:17,076 INFO [StoreOpener-2fdbc50ddfe83bbee146394d8c1f3c34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2fdbc50ddfe83bbee146394d8c1f3c34 columnFamilyName m 2023-07-13 22:16:17,077 INFO [StoreOpener-2fdbc50ddfe83bbee146394d8c1f3c34-1] regionserver.HStore(310): Store=2fdbc50ddfe83bbee146394d8c1f3c34/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:17,077 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34 2023-07-13 22:16:17,078 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34 2023-07-13 22:16:17,080 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 2fdbc50ddfe83bbee146394d8c1f3c34 2023-07-13 22:16:17,081 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:17,082 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2fdbc50ddfe83bbee146394d8c1f3c34; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@15d0b0c7, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:17,082 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2fdbc50ddfe83bbee146394d8c1f3c34: 2023-07-13 22:16:17,082 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34., pid=9, masterSystemTime=1689286577069 2023-07-13 22:16:17,084 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:17,084 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:17,084 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=2fdbc50ddfe83bbee146394d8c1f3c34, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:17,084 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689286577084"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286577084"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286577084"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286577084"}]},"ts":"1689286577084"} 2023-07-13 22:16:17,087 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-13 22:16:17,087 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 2fdbc50ddfe83bbee146394d8c1f3c34, server=jenkins-hbase4.apache.org,43005,1689286575655 in 168 msec 2023-07-13 22:16:17,088 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-13 22:16:17,088 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2fdbc50ddfe83bbee146394d8c1f3c34, ASSIGN in 328 msec 2023-07-13 22:16:17,096 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:16:17,102 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 120 msec 2023-07-13 22:16:17,103 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:16:17,103 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286577103"}]},"ts":"1689286577103"} 2023-07-13 22:16:17,104 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-13 22:16:17,107 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:16:17,108 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 417 msec 2023-07-13 22:16:17,114 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 22:16:17,117 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 22:16:17,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.150sec 2023-07-13 22:16:17,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-13 22:16:17,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 22:16:17,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 22:16:17,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45763,1689286575307-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 22:16:17,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45763,1689286575307-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-13 22:16:17,117 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 22:16:17,162 DEBUG [Listener at localhost/43483] zookeeper.ReadOnlyZKClient(139): Connect 0x50f9ce59 to 127.0.0.1:63373 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:17,167 DEBUG [Listener at localhost/43483] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26784d6f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:17,168 DEBUG [hconnection-0x15d54c43-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:16:17,170 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34688, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:16:17,171 INFO [Listener at localhost/43483] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,45763,1689286575307 2023-07-13 22:16:17,171 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:17,196 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-13 22:16:17,196 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-13 22:16:17,201 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:17,201 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:17,202 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 22:16:17,203 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 22:16:17,274 DEBUG [Listener at localhost/43483] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 22:16:17,275 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50426, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 22:16:17,283 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-13 22:16:17,283 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:17,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-13 22:16:17,284 DEBUG [Listener at localhost/43483] zookeeper.ReadOnlyZKClient(139): Connect 0x0cbcd600 to 127.0.0.1:63373 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:17,289 DEBUG [Listener at localhost/43483] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33342480, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:17,289 INFO [Listener at localhost/43483] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:63373 2023-07-13 22:16:17,292 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:16:17,294 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10160c207bc000a connected 2023-07-13 22:16:17,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:17,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 
2023-07-13 22:16:17,300 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-13 22:16:17,318 INFO [Listener at localhost/43483] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 22:16:17,318 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:17,319 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:17,319 INFO [Listener at localhost/43483] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 22:16:17,319 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 22:16:17,319 INFO [Listener at localhost/43483] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 22:16:17,319 INFO [Listener at localhost/43483] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 22:16:17,320 INFO [Listener at localhost/43483] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43583 2023-07-13 22:16:17,320 INFO [Listener at localhost/43483] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 22:16:17,322 DEBUG [Listener at localhost/43483] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 22:16:17,323 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:17,324 INFO [Listener at localhost/43483] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 22:16:17,326 INFO [Listener at localhost/43483] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43583 connecting to ZooKeeper ensemble=127.0.0.1:63373 2023-07-13 22:16:17,330 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:435830x0, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 22:16:17,331 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(162): regionserver:435830x0, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 22:16:17,332 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43583-0x10160c207bc000b connected 2023-07-13 22:16:17,332 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(162): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-13 22:16:17,333 DEBUG [Listener at localhost/43483] zookeeper.ZKUtil(164): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/acl 2023-07-13 22:16:17,333 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43583 2023-07-13 22:16:17,334 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43583 2023-07-13 22:16:17,334 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43583 2023-07-13 22:16:17,338 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43583 2023-07-13 22:16:17,339 DEBUG [Listener at localhost/43483] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43583 2023-07-13 22:16:17,340 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 22:16:17,341 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 22:16:17,341 INFO [Listener at localhost/43483] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 22:16:17,341 INFO [Listener at localhost/43483] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 22:16:17,342 INFO [Listener at localhost/43483] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 22:16:17,342 INFO [Listener at localhost/43483] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 22:16:17,342 INFO [Listener at localhost/43483] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 22:16:17,343 INFO [Listener at localhost/43483] http.HttpServer(1146): Jetty bound to port 40567 2023-07-13 22:16:17,343 INFO [Listener at localhost/43483] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 22:16:17,346 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:17,346 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5526afb3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.log.dir/,AVAILABLE} 2023-07-13 22:16:17,346 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:17,346 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@336425c2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 22:16:17,459 INFO [Listener at localhost/43483] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 22:16:17,460 INFO [Listener at localhost/43483] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 22:16:17,460 INFO [Listener at localhost/43483] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 22:16:17,460 INFO [Listener at localhost/43483] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 22:16:17,461 INFO [Listener at localhost/43483] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 22:16:17,461 INFO [Listener at localhost/43483] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@783f1966{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/java.io.tmpdir/jetty-0_0_0_0-40567-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4407400066237949546/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:17,463 INFO [Listener at localhost/43483] server.AbstractConnector(333): Started ServerConnector@8e66af4{HTTP/1.1, (http/1.1)}{0.0.0.0:40567} 2023-07-13 22:16:17,463 INFO [Listener at localhost/43483] server.Server(415): Started @44112ms 2023-07-13 22:16:17,466 INFO [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(951): ClusterId : a1a653bf-f415-4879-aff0-f78987b4b2f9 2023-07-13 22:16:17,466 DEBUG [RS:3;jenkins-hbase4:43583] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 22:16:17,468 DEBUG [RS:3;jenkins-hbase4:43583] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 22:16:17,468 DEBUG [RS:3;jenkins-hbase4:43583] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 22:16:17,470 DEBUG [RS:3;jenkins-hbase4:43583] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 22:16:17,473 DEBUG [RS:3;jenkins-hbase4:43583] zookeeper.ReadOnlyZKClient(139): Connect 0x18a9fda6 to 
127.0.0.1:63373 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 22:16:17,477 DEBUG [RS:3;jenkins-hbase4:43583] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@166e00c3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 22:16:17,477 DEBUG [RS:3;jenkins-hbase4:43583] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@125196d6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:16:17,485 DEBUG [RS:3;jenkins-hbase4:43583] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:43583 2023-07-13 22:16:17,485 INFO [RS:3;jenkins-hbase4:43583] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 22:16:17,486 INFO [RS:3;jenkins-hbase4:43583] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 22:16:17,486 DEBUG [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 22:16:17,486 INFO [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45763,1689286575307 with isa=jenkins-hbase4.apache.org/172.31.14.131:43583, startcode=1689286577318 2023-07-13 22:16:17,486 DEBUG [RS:3;jenkins-hbase4:43583] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 22:16:17,488 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36439, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 22:16:17,488 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45763] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:17,488 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 22:16:17,489 DEBUG [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18 2023-07-13 22:16:17,489 DEBUG [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40407 2023-07-13 22:16:17,489 DEBUG [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33509 2023-07-13 22:16:17,493 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:17,493 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:17,493 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:17,493 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:17,493 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:17,494 DEBUG [RS:3;jenkins-hbase4:43583] zookeeper.ZKUtil(162): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:17,494 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43583,1689286577318] 2023-07-13 22:16:17,494 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 22:16:17,494 WARN [RS:3;jenkins-hbase4:43583] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 22:16:17,494 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:17,494 INFO [RS:3;jenkins-hbase4:43583] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 22:16:17,494 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:17,495 DEBUG [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:17,495 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:17,495 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-13 22:16:17,495 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:17,496 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:17,496 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:17,497 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:17,497 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:17,497 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:17,497 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:17,497 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:17,498 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:17,499 DEBUG [RS:3;jenkins-hbase4:43583] zookeeper.ZKUtil(162): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:17,499 DEBUG [RS:3;jenkins-hbase4:43583] zookeeper.ZKUtil(162): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:17,500 DEBUG [RS:3;jenkins-hbase4:43583] zookeeper.ZKUtil(162): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:17,500 DEBUG [RS:3;jenkins-hbase4:43583] zookeeper.ZKUtil(162): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:17,501 DEBUG [RS:3;jenkins-hbase4:43583] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 22:16:17,501 INFO [RS:3;jenkins-hbase4:43583] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 22:16:17,502 INFO [RS:3;jenkins-hbase4:43583] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 22:16:17,502 INFO [RS:3;jenkins-hbase4:43583] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 22:16:17,502 INFO [RS:3;jenkins-hbase4:43583] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:17,502 INFO [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 22:16:17,504 INFO [RS:3;jenkins-hbase4:43583] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 22:16:17,504 DEBUG [RS:3;jenkins-hbase4:43583] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:17,504 DEBUG [RS:3;jenkins-hbase4:43583] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:17,504 DEBUG [RS:3;jenkins-hbase4:43583] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:17,505 DEBUG [RS:3;jenkins-hbase4:43583] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:17,505 DEBUG [RS:3;jenkins-hbase4:43583] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:17,505 DEBUG [RS:3;jenkins-hbase4:43583] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 22:16:17,505 DEBUG [RS:3;jenkins-hbase4:43583] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:17,505 DEBUG [RS:3;jenkins-hbase4:43583] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:17,505 DEBUG [RS:3;jenkins-hbase4:43583] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:17,505 DEBUG [RS:3;jenkins-hbase4:43583] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 22:16:17,506 INFO [RS:3;jenkins-hbase4:43583] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:17,506 INFO [RS:3;jenkins-hbase4:43583] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:17,506 INFO [RS:3;jenkins-hbase4:43583] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 22:16:17,517 INFO [RS:3;jenkins-hbase4:43583] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 22:16:17,517 INFO [RS:3;jenkins-hbase4:43583] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43583,1689286577318-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 22:16:17,527 INFO [RS:3;jenkins-hbase4:43583] regionserver.Replication(203): jenkins-hbase4.apache.org,43583,1689286577318 started 2023-07-13 22:16:17,527 INFO [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43583,1689286577318, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43583, sessionid=0x10160c207bc000b 2023-07-13 22:16:17,527 DEBUG [RS:3;jenkins-hbase4:43583] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 22:16:17,527 DEBUG [RS:3;jenkins-hbase4:43583] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:17,527 DEBUG [RS:3;jenkins-hbase4:43583] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43583,1689286577318' 2023-07-13 22:16:17,527 DEBUG [RS:3;jenkins-hbase4:43583] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 22:16:17,528 DEBUG [RS:3;jenkins-hbase4:43583] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 22:16:17,528 DEBUG [RS:3;jenkins-hbase4:43583] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 22:16:17,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:17,528 DEBUG [RS:3;jenkins-hbase4:43583] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 22:16:17,528 DEBUG [RS:3;jenkins-hbase4:43583] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:17,528 DEBUG [RS:3;jenkins-hbase4:43583] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43583,1689286577318' 2023-07-13 22:16:17,528 DEBUG [RS:3;jenkins-hbase4:43583] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 22:16:17,529 DEBUG [RS:3;jenkins-hbase4:43583] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 22:16:17,529 DEBUG [RS:3;jenkins-hbase4:43583] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 22:16:17,529 INFO [RS:3;jenkins-hbase4:43583] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 22:16:17,529 INFO [RS:3;jenkins-hbase4:43583] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 22:16:17,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:17,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:17,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:17,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:17,535 DEBUG [hconnection-0x3699dcf7-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 22:16:17,536 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34696, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 22:16:17,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:17,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:17,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45763] to rsgroup master 2023-07-13 22:16:17,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:17,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:50426 deadline: 1689287777544, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
2023-07-13 22:16:17,545 WARN [Listener at localhost/43483] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 22:16:17,546 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:17,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:17,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:17,547 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34769, jenkins-hbase4.apache.org:35629, jenkins-hbase4.apache.org:43005, jenkins-hbase4.apache.org:43583], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:17,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:17,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:17,594 INFO [Listener at localhost/43483] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=557 (was 515) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1832481523-2247 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2120492832-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-2133f866-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:40407 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 35819 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/43483-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:35629Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3e458899-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2120492832-2312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33829-SendThread(127.0.0.1:50537) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: Listener at localhost/43483-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1973182020-172.31.14.131-1689286574453 heartbeating to localhost/127.0.0.1:40407 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33829-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@44a42d68[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp369896903-2215-acceptor-0@58043a26-ServerConnector@74189b3b{HTTP/1.1, (http/1.1)}{0.0.0.0:33509} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45763 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data4/current/BP-1973182020-172.31.14.131-1689286574453 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x6e46f61a-SendThread(127.0.0.1:63373) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 2 on default port 43483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50537@0x6fbb5489-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: qtp2120492832-2311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1832481523-2248 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp369896903-2220 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45763 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1454254475-2276-acceptor-0@7bf943bf-ServerConnector@30e74a47{HTTP/1.1, (http/1.1)}{0.0.0.0:37109} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43483-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp369896903-2219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x18a9fda6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2026061141.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x18a9fda6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1490576280-2589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43291,1689286569634 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: hconnection-0x3e458899-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50537@0x6fbb5489 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2026061141.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp442358688-2321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50537@0x6fbb5489-SendThread(127.0.0.1:50537) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45763 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:40407 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:63373): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: IPC Server handler 0 on default port 43483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43483 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1320284695_17 at /127.0.0.1:42012 [Receiving block 
BP-1973182020-172.31.14.131-1689286574453:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43483.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-56577941_17 at /127.0.0.1:39036 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43483-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1454254475-2282 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data3/current/BP-1973182020-172.31.14.131-1689286574453 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1454254475-2281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43483.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-681431082_17 at /127.0.0.1:39064 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x5bbc9bb4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2026061141.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2120492832-2308 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x22ca3124-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Session-HouseKeeper-955135d-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 34055 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:44513 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp369896903-2218 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3e458899-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1454254475-2279 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 35819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:40407 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-56577941_17 at /127.0.0.1:46644 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43483-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1832481523-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 43483 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x15d54c43-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 40407 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:40407 
from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x50f9ce59-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:40407 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1973182020-172.31.14.131-1689286574453 heartbeating to localhost/127.0.0.1:40407 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1419503034@qtp-1187409731-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38383 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp369896903-2216 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1454254475-2275 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1959294063.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3699dcf7-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: hconnection-0x3e458899-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:43005 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x00f977bc-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins@localhost:44513 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3e458899-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1808403086@qtp-1339636449-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46527 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: jenkins-hbase4:43005Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1490576280-2585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43583 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_487326040_17 at /127.0.0.1:46610 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:34769 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1454254475-2280 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-96c7151-1 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@7b27bc97 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: M:0;jenkins-hbase4:45763 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp369896903-2217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1490576280-2588 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@373aacad java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp442358688-2316 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1959294063.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@79f001a0 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@592e1b3c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:44513 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_487326040_17 at /127.0.0.1:39104 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:35629 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:44513 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/43483.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 34055 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18-prefix:jenkins-hbase4.apache.org,35629,1689286575808 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:34769Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 40407 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x6e46f61a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2026061141.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:44513 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@528d9088 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1320284695_17 at /127.0.0.1:46692 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@15ada401[State = -1, 
empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 35819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@46eb3fc1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689286576093 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1490576280-2586 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:40407 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45763 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:40407 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x00f977bc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1832481523-2249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 34055 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:40407 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp2120492832-2306-acceptor-0@1f2f9c21-ServerConnector@7df6aaec{HTTP/1.1, (http/1.1)}{0.0.0.0:42821} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 43483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 1764966600@qtp-1187409731-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially 
hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2739f239 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@3faf32ab java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43483-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3e458899-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1708579834@qtp-905544834-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp369896903-2214 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1959294063.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:63373 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 1053059318@qtp-905544834-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37167 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43483-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_487326040_17 at /127.0.0.1:42016 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data6/current/BP-1973182020-172.31.14.131-1689286574453 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43583-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp442358688-2323 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 35819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18-prefix:jenkins-hbase4.apache.org,43005,1689286575655.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45763 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data5/current/BP-1973182020-172.31.14.131-1689286574453 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3e458899-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp442358688-2318 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1959294063.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x5bbc9bb4-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18-prefix:jenkins-hbase4.apache.org,34769,1689286575499 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@6ba75fe2[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@490f72cd sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@613bea02 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2120492832-2307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x50f9ce59-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 2 on default port 40407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:44513 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45763 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:43583Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data1/current/BP-1973182020-172.31.14.131-1689286574453 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1832481523-2250 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-681431082_17 at /127.0.0.1:46674 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x6e46f61a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@3a24f2dc java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 566453113@qtp-1339636449-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_487326040_17 at /127.0.0.1:42026 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x5bbc9bb4-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server idle connection scanner for port 35819 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1832481523-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 40407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45763 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43483-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 1380752573@qtp-1310799732-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server handler 0 on default port 40407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 43483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_487326040_17 at /127.0.0.1:46708 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 34055 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_487326040_17 at /127.0.0.1:39088 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp442358688-2320-acceptor-0@28d48e48-ServerConnector@149a369b{HTTP/1.1, (http/1.1)}{0.0.0.0:35305} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2120492832-2305 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1959294063.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43483-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/43483.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x18a9fda6-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData-prefix:jenkins-hbase4.apache.org,45763,1689286575307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x22ca3124 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2026061141.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp442358688-2322 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-681431082_17 at /127.0.0.1:41924 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43483-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins@localhost:40407 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x50f9ce59 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2026061141.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp442358688-2319 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1959294063.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45763 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-56577941_17 at /127.0.0.1:41968 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data2/current/BP-1973182020-172.31.14.131-1689286574453 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-681431082_17 at /127.0.0.1:42002 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:34769-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 35819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:44513 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x3e458899-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1454254475-2278 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x0cbcd600-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 
RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x22ca3124-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1973182020-172.31.14.131-1689286574453 heartbeating to localhost/127.0.0.1:40407 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:43005-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45763,1689286575307 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Listener at localhost/43483-SendThread(127.0.0.1:63373) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_487326040_17 at /127.0.0.1:46682 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 34055 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1490576280-2582 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1959294063.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 34055 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Session-HouseKeeper-fa65c13-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1320284695_17 at 
/127.0.0.1:39074 [Receiving block BP-1973182020-172.31.14.131-1689286574453:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18-prefix:jenkins-hbase4.apache.org,43005,1689286575655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x0cbcd600-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1490576280-2583-acceptor-0@4ea41aed-ServerConnector@8e66af4{HTTP/1.1, (http/1.1)}{0.0.0.0:40567} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp369896903-2221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 2141173808@qtp-1310799732-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38511 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp2120492832-2309 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@35f552ae sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:40407 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@32ead736 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:44513 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1490576280-2587 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3a9a2339-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(1351541573) 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: qtp1454254475-2277 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3699dcf7-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1973182020-172.31.14.131-1689286574453:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 40407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:44513 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x00f977bc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2026061141.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1490576280-2584 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689286576093 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: qtp442358688-2317 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1959294063.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63373@0x0cbcd600 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2026061141.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1832481523-2246-acceptor-0@62d2c135-ServerConnector@63684f13{HTTP/1.1, (http/1.1)}{0.0.0.0:44583} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:2;jenkins-hbase4:35629-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45763 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:45763 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1832481523-2245 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1959294063.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43483-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) - Thread LEAK? -, OpenFileDescriptor=835 (was 799) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=384 (was 381) - SystemLoadAverage LEAK? 
-, ProcessCount=172 (was 172), AvailableMemoryMB=3648 (was 3787) 2023-07-13 22:16:17,598 WARN [Listener at localhost/43483] hbase.ResourceChecker(130): Thread=557 is superior to 500 2023-07-13 22:16:17,616 INFO [Listener at localhost/43483] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=557, OpenFileDescriptor=834, MaxFileDescriptor=60000, SystemLoadAverage=384, ProcessCount=172, AvailableMemoryMB=3646 2023-07-13 22:16:17,617 WARN [Listener at localhost/43483] hbase.ResourceChecker(130): Thread=557 is superior to 500 2023-07-13 22:16:17,617 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-13 22:16:17,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:17,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:17,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:17,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 22:16:17,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:17,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:17,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:17,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:17,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:17,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:17,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:17,631 INFO [RS:3;jenkins-hbase4:43583] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43583%2C1689286577318, suffix=, logDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,43583,1689286577318, archiveDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/oldWALs, maxLogs=32 2023-07-13 22:16:17,632 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(152): 
Restoring servers: 0 2023-07-13 22:16:17,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:17,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:17,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:17,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:17,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:17,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:17,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:17,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45763] to rsgroup master 2023-07-13 22:16:17,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:17,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:50426 deadline: 1689287777642, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 2023-07-13 22:16:17,643 WARN [Listener at localhost/43483] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
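The ConstraintException above (its stack trace follows) is thrown when the test harness asks the master to move the master's own address, jenkins-hbase4.apache.org:45763, into the 'master' rsgroup; the master rejects it because that address is not a registered online region server. A minimal sketch of the client-side call on that code path, assuming an open Connection to the mini cluster and the RSGroupAdminClient(Connection) constructor used by the hbase-rsgroup module; the class, method, and variable names below are illustrative and not copied from the test source:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersSketch {
      // Ask the master to move one server (identified by a host:port Address) into the
      // named rsgroup. If that address is not a known online region server, the master
      // rejects the request with the ConstraintException seen in the log above.
      static void moveServerToGroup(Connection conn, String host, int port, String group)
          throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveServers(Collections.singleton(Address.fromParts(host, port)), group);
      }
    }

TestRSGroupsBase treats this failure as expected during setup ("Got this on setup, FYI") and continues; the full remote stack trace is reproduced below.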
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 22:16:17,645 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:17,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:17,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:17,647 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34769, jenkins-hbase4.apache.org:35629, jenkins-hbase4.apache.org:43005, jenkins-hbase4.apache.org:43583], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:17,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:17,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:17,654 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK] 2023-07-13 22:16:17,654 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK] 2023-07-13 22:16:17,661 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK] 2023-07-13 22:16:17,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:17,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-13 22:16:17,663 INFO [RS:3;jenkins-hbase4:43583] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/WALs/jenkins-hbase4.apache.org,43583,1689286577318/jenkins-hbase4.apache.org%2C43583%2C1689286577318.1689286577632 2023-07-13 22:16:17,664 DEBUG [RS:3;jenkins-hbase4:43583] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44189,DS-24242fd8-358a-4187-a1f5-9a5588ed2305,DISK], 
DatanodeInfoWithStorage[127.0.0.1:40463,DS-1bf525b9-b68f-4988-8bfa-c72e41a897a7,DISK], DatanodeInfoWithStorage[127.0.0.1:36163,DS-ede30c6f-2fff-47aa-84a6-2d6d112749f2,DISK]] 2023-07-13 22:16:17,664 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:17,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-13 22:16:17,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 22:16:17,666 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:17,667 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:17,667 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:17,669 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 22:16:17,670 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/default/t1/92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:17,671 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/default/t1/92e347adab175089c9eea30c4e53c406 empty. 
2023-07-13 22:16:17,672 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/default/t1/92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:17,672 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-13 22:16:17,684 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-13 22:16:17,685 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 92e347adab175089c9eea30c4e53c406, NAME => 't1,,1689286577662.92e347adab175089c9eea30c4e53c406.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp 2023-07-13 22:16:17,697 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689286577662.92e347adab175089c9eea30c4e53c406.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:17,698 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 92e347adab175089c9eea30c4e53c406, disabling compactions & flushes 2023-07-13 22:16:17,698 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689286577662.92e347adab175089c9eea30c4e53c406. 2023-07-13 22:16:17,698 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689286577662.92e347adab175089c9eea30c4e53c406. 2023-07-13 22:16:17,698 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689286577662.92e347adab175089c9eea30c4e53c406. after waiting 0 ms 2023-07-13 22:16:17,698 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689286577662.92e347adab175089c9eea30c4e53c406. 2023-07-13 22:16:17,698 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689286577662.92e347adab175089c9eea30c4e53c406. 2023-07-13 22:16:17,698 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 92e347adab175089c9eea30c4e53c406: 2023-07-13 22:16:17,700 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 22:16:17,701 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689286577662.92e347adab175089c9eea30c4e53c406.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689286577700"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286577700"}]},"ts":"1689286577700"} 2023-07-13 22:16:17,702 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-13 22:16:17,703 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 22:16:17,703 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286577703"}]},"ts":"1689286577703"} 2023-07-13 22:16:17,704 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-13 22:16:17,708 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 22:16:17,708 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 22:16:17,708 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 22:16:17,708 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 22:16:17,708 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-13 22:16:17,708 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 22:16:17,708 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=92e347adab175089c9eea30c4e53c406, ASSIGN}] 2023-07-13 22:16:17,709 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=92e347adab175089c9eea30c4e53c406, ASSIGN 2023-07-13 22:16:17,711 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=92e347adab175089c9eea30c4e53c406, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35629,1689286575808; forceNewPlan=false, retain=false 2023-07-13 22:16:17,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 22:16:17,861 INFO [jenkins-hbase4:45763] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 22:16:17,862 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=92e347adab175089c9eea30c4e53c406, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:17,863 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689286577662.92e347adab175089c9eea30c4e53c406.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689286577862"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286577862"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286577862"}]},"ts":"1689286577862"} 2023-07-13 22:16:17,865 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 92e347adab175089c9eea30c4e53c406, server=jenkins-hbase4.apache.org,35629,1689286575808}] 2023-07-13 22:16:17,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 22:16:18,019 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689286577662.92e347adab175089c9eea30c4e53c406. 2023-07-13 22:16:18,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 92e347adab175089c9eea30c4e53c406, NAME => 't1,,1689286577662.92e347adab175089c9eea30c4e53c406.', STARTKEY => '', ENDKEY => ''} 2023-07-13 22:16:18,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:18,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689286577662.92e347adab175089c9eea30c4e53c406.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 22:16:18,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:18,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:18,021 INFO [StoreOpener-92e347adab175089c9eea30c4e53c406-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:18,023 DEBUG [StoreOpener-92e347adab175089c9eea30c4e53c406-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/default/t1/92e347adab175089c9eea30c4e53c406/cf1 2023-07-13 22:16:18,023 DEBUG [StoreOpener-92e347adab175089c9eea30c4e53c406-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/default/t1/92e347adab175089c9eea30c4e53c406/cf1 2023-07-13 22:16:18,023 INFO [StoreOpener-92e347adab175089c9eea30c4e53c406-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 92e347adab175089c9eea30c4e53c406 columnFamilyName cf1 2023-07-13 22:16:18,024 INFO [StoreOpener-92e347adab175089c9eea30c4e53c406-1] regionserver.HStore(310): Store=92e347adab175089c9eea30c4e53c406/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 22:16:18,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/default/t1/92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:18,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/default/t1/92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:18,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:18,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/default/t1/92e347adab175089c9eea30c4e53c406/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 22:16:18,031 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 92e347adab175089c9eea30c4e53c406; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10995056800, jitterRate=0.023994460701942444}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 22:16:18,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 92e347adab175089c9eea30c4e53c406: 2023-07-13 22:16:18,032 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689286577662.92e347adab175089c9eea30c4e53c406., pid=14, masterSystemTime=1689286578016 2023-07-13 22:16:18,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689286577662.92e347adab175089c9eea30c4e53c406. 2023-07-13 22:16:18,033 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689286577662.92e347adab175089c9eea30c4e53c406. 
2023-07-13 22:16:18,033 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=92e347adab175089c9eea30c4e53c406, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:18,034 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689286577662.92e347adab175089c9eea30c4e53c406.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689286578033"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689286578033"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689286578033"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689286578033"}]},"ts":"1689286578033"} 2023-07-13 22:16:18,036 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-13 22:16:18,036 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 92e347adab175089c9eea30c4e53c406, server=jenkins-hbase4.apache.org,35629,1689286575808 in 170 msec 2023-07-13 22:16:18,037 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-13 22:16:18,038 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=92e347adab175089c9eea30c4e53c406, ASSIGN in 328 msec 2023-07-13 22:16:18,038 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 22:16:18,038 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286578038"}]},"ts":"1689286578038"} 2023-07-13 22:16:18,039 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-13 22:16:18,041 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 22:16:18,042 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 379 msec 2023-07-13 22:16:18,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 22:16:18,269 INFO [Listener at localhost/43483] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-13 22:16:18,269 DEBUG [Listener at localhost/43483] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-13 22:16:18,269 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:18,271 INFO [Listener at localhost/43483] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-13 22:16:18,271 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:18,271 INFO [Listener at localhost/43483] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-13 22:16:18,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 22:16:18,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-13 22:16:18,275 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 22:16:18,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-13 22:16:18,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.14.131:50426 deadline: 1689286638272, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-13 22:16:18,278 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:18,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-13 22:16:18,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:18,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:18,379 INFO [Listener at localhost/43483] client.HBaseAdmin$15(890): Started disable of t1 2023-07-13 22:16:18,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-13 22:16:18,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-13 22:16:18,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 22:16:18,383 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286578383"}]},"ts":"1689286578383"} 2023-07-13 22:16:18,384 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-13 22:16:18,386 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-13 22:16:18,386 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=92e347adab175089c9eea30c4e53c406, UNASSIGN}] 2023-07-13 22:16:18,387 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=92e347adab175089c9eea30c4e53c406, UNASSIGN 2023-07-13 22:16:18,388 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=92e347adab175089c9eea30c4e53c406, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:18,388 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689286577662.92e347adab175089c9eea30c4e53c406.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689286578388"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689286578388"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689286578388"}]},"ts":"1689286578388"} 2023-07-13 22:16:18,389 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 92e347adab175089c9eea30c4e53c406, server=jenkins-hbase4.apache.org,35629,1689286575808}] 2023-07-13 22:16:18,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 22:16:18,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:18,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 92e347adab175089c9eea30c4e53c406, disabling compactions & flushes 2023-07-13 22:16:18,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689286577662.92e347adab175089c9eea30c4e53c406. 2023-07-13 22:16:18,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689286577662.92e347adab175089c9eea30c4e53c406. 2023-07-13 22:16:18,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689286577662.92e347adab175089c9eea30c4e53c406. after waiting 0 ms 2023-07-13 22:16:18,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689286577662.92e347adab175089c9eea30c4e53c406. 
2023-07-13 22:16:18,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/default/t1/92e347adab175089c9eea30c4e53c406/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 22:16:18,547 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689286577662.92e347adab175089c9eea30c4e53c406. 2023-07-13 22:16:18,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 92e347adab175089c9eea30c4e53c406: 2023-07-13 22:16:18,550 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:18,550 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=92e347adab175089c9eea30c4e53c406, regionState=CLOSED 2023-07-13 22:16:18,551 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689286577662.92e347adab175089c9eea30c4e53c406.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689286578550"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689286578550"}]},"ts":"1689286578550"} 2023-07-13 22:16:18,554 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-13 22:16:18,554 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 92e347adab175089c9eea30c4e53c406, server=jenkins-hbase4.apache.org,35629,1689286575808 in 163 msec 2023-07-13 22:16:18,556 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-13 22:16:18,556 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=92e347adab175089c9eea30c4e53c406, UNASSIGN in 168 msec 2023-07-13 22:16:18,556 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689286578556"}]},"ts":"1689286578556"} 2023-07-13 22:16:18,557 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-13 22:16:18,559 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-13 22:16:18,560 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 179 msec 2023-07-13 22:16:18,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 22:16:18,685 INFO [Listener at localhost/43483] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-13 22:16:18,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-13 22:16:18,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-13 22:16:18,689 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-13 22:16:18,689 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-13 22:16:18,689 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-13 22:16:18,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:18,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:18,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:18,693 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/default/t1/92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:18,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 22:16:18,695 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/default/t1/92e347adab175089c9eea30c4e53c406/cf1, FileablePath, hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/default/t1/92e347adab175089c9eea30c4e53c406/recovered.edits] 2023-07-13 22:16:18,700 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/default/t1/92e347adab175089c9eea30c4e53c406/recovered.edits/4.seqid to hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/archive/data/default/t1/92e347adab175089c9eea30c4e53c406/recovered.edits/4.seqid 2023-07-13 22:16:18,700 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/.tmp/data/default/t1/92e347adab175089c9eea30c4e53c406 2023-07-13 22:16:18,700 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-13 22:16:18,703 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-13 22:16:18,705 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-13 22:16:18,706 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-13 22:16:18,707 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-13 22:16:18,707 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-13 22:16:18,707 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689286577662.92e347adab175089c9eea30c4e53c406.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689286578707"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:18,709 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 22:16:18,709 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 92e347adab175089c9eea30c4e53c406, NAME => 't1,,1689286577662.92e347adab175089c9eea30c4e53c406.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 22:16:18,709 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-13 22:16:18,709 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689286578709"}]},"ts":"9223372036854775807"} 2023-07-13 22:16:18,710 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-13 22:16:18,713 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-13 22:16:18,719 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 27 msec 2023-07-13 22:16:18,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 22:16:18,795 INFO [Listener at localhost/43483] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-13 22:16:18,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:18,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 22:16:18,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:18,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:18,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:18,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:18,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:18,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:18,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:18,812 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:18,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:18,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:18,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:18,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:18,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:18,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45763] to rsgroup master 2023-07-13 22:16:18,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:18,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:50426 deadline: 1689287778823, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 2023-07-13 22:16:18,824 WARN [Listener at localhost/43483] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:16:18,828 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:18,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,829 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34769, jenkins-hbase4.apache.org:35629, jenkins-hbase4.apache.org:43005, jenkins-hbase4.apache.org:43583], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:18,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:18,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:18,852 INFO [Listener at localhost/43483] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=567 (was 557) - Thread LEAK? -, OpenFileDescriptor=843 (was 834) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=384 (was 384), ProcessCount=172 (was 172), AvailableMemoryMB=3631 (was 3646) 2023-07-13 22:16:18,852 WARN [Listener at localhost/43483] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-13 22:16:18,870 INFO [Listener at localhost/43483] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=567, OpenFileDescriptor=843, MaxFileDescriptor=60000, SystemLoadAverage=384, ProcessCount=172, AvailableMemoryMB=3629 2023-07-13 22:16:18,870 WARN [Listener at localhost/43483] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-13 22:16:18,870 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-13 22:16:18,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:18,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 22:16:18,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:18,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:18,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:18,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:18,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:18,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:18,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:18,883 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:18,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:18,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:18,885 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:18,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:18,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:18,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45763] to rsgroup master 2023-07-13 22:16:18,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:18,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50426 deadline: 1689287778892, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 2023-07-13 22:16:18,892 WARN [Listener at localhost/43483] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 22:16:18,894 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:18,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,895 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34769, jenkins-hbase4.apache.org:35629, jenkins-hbase4.apache.org:43005, jenkins-hbase4.apache.org:43583], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:18,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:18,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:18,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-13 22:16:18,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:16:18,897 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-13 22:16:18,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-13 22:16:18,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 22:16:18,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:18,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 22:16:18,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:18,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:18,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:18,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:18,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:18,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:18,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:18,913 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:18,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:18,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:18,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:18,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:18,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:18,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45763] to rsgroup master 2023-07-13 22:16:18,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:18,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50426 deadline: 1689287778922, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 2023-07-13 22:16:18,923 WARN [Listener at localhost/43483] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:16:18,925 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:18,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,926 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34769, jenkins-hbase4.apache.org:35629, jenkins-hbase4.apache.org:43005, jenkins-hbase4.apache.org:43583], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:18,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:18,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:18,948 INFO [Listener at localhost/43483] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569 (was 567) - Thread LEAK? 
-, OpenFileDescriptor=843 (was 843), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=384 (was 384), ProcessCount=172 (was 172), AvailableMemoryMB=3629 (was 3629) 2023-07-13 22:16:18,948 WARN [Listener at localhost/43483] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-13 22:16:18,966 INFO [Listener at localhost/43483] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=569, OpenFileDescriptor=843, MaxFileDescriptor=60000, SystemLoadAverage=384, ProcessCount=172, AvailableMemoryMB=3629 2023-07-13 22:16:18,966 WARN [Listener at localhost/43483] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-13 22:16:18,966 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-13 22:16:18,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:18,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 22:16:18,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:18,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:18,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:18,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:18,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:18,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:18,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:18,979 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:18,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:18,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:18,981 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:18,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:18,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:18,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45763] to rsgroup master 2023-07-13 22:16:18,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:18,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50426 deadline: 1689287778988, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 2023-07-13 22:16:18,989 WARN [Listener at localhost/43483] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 22:16:18,991 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:18,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,992 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34769, jenkins-hbase4.apache.org:35629, jenkins-hbase4.apache.org:43005, jenkins-hbase4.apache.org:43583], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:18,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:18,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:18,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:18,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:18,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:18,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 22:16:18,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:18,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:18,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:18,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:19,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:19,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:19,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:19,010 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:19,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:19,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:19,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:19,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:19,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:19,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:19,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:19,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45763] to rsgroup master 2023-07-13 22:16:19,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:19,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50426 deadline: 1689287779018, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 2023-07-13 22:16:19,019 WARN [Listener at localhost/43483] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:16:19,021 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:19,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:19,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:19,022 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34769, jenkins-hbase4.apache.org:35629, jenkins-hbase4.apache.org:43005, jenkins-hbase4.apache.org:43583], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:19,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:19,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:19,046 INFO [Listener at localhost/43483] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=570 (was 569) - Thread LEAK? 
-, OpenFileDescriptor=843 (was 843), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=384 (was 384), ProcessCount=172 (was 172), AvailableMemoryMB=3624 (was 3629) 2023-07-13 22:16:19,047 WARN [Listener at localhost/43483] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-13 22:16:19,070 INFO [Listener at localhost/43483] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=570, OpenFileDescriptor=843, MaxFileDescriptor=60000, SystemLoadAverage=384, ProcessCount=172, AvailableMemoryMB=3622 2023-07-13 22:16:19,070 WARN [Listener at localhost/43483] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-13 22:16:19,070 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-13 22:16:19,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:19,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:19,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:19,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 22:16:19,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:19,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:19,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:19,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:19,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:19,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:19,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:19,083 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:19,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:19,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:19,085 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:19,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:19,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:19,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:19,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:19,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45763] to rsgroup master 2023-07-13 22:16:19,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:19,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50426 deadline: 1689287779093, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 2023-07-13 22:16:19,093 WARN [Listener at localhost/43483] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 22:16:19,095 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:19,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:19,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:19,096 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34769, jenkins-hbase4.apache.org:35629, jenkins-hbase4.apache.org:43005, jenkins-hbase4.apache.org:43583], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:19,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:19,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:19,097 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-13 22:16:19,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-13 22:16:19,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-13 22:16:19,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:19,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:19,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 22:16:19,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:19,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:19,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:19,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-13 22:16:19,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-13 22:16:19,111 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 22:16:19,116 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:16:19,118 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 11 msec 2023-07-13 22:16:19,193 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 22:16:19,193 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-13 22:16:19,193 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 22:16:19,194 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-13 22:16:19,194 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 22:16:19,194 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-13 22:16:19,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 22:16:19,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-13 22:16:19,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:19,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 268 
service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:50426 deadline: 1689287779213, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-13 22:16:19,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-13 22:16:19,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-13 22:16:19,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-13 22:16:19,234 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-13 22:16:19,235 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-13 22:16:19,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-13 22:16:19,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-13 22:16:19,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-13 22:16:19,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:19,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-13 22:16:19,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:19,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 22:16:19,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:19,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:19,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:19,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-13 22:16:19,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 
22:16:19,350 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 22:16:19,352 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 22:16:19,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-13 22:16:19,353 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 22:16:19,354 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-13 22:16:19,354 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 22:16:19,355 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 22:16:19,356 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 22:16:19,357 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 8 msec 2023-07-13 22:16:19,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-13 22:16:19,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-13 22:16:19,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-13 22:16:19,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:19,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:19,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-13 22:16:19,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:19,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:19,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 
2023-07-13 22:16:19,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:19,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:50426 deadline: 1689286639464, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-13 22:16:19,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:19,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:19,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:19,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 22:16:19,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:19,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:19,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:19,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-13 22:16:19,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:19,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:19,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 22:16:19,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:19,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 22:16:19,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 22:16:19,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 22:16:19,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 22:16:19,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 22:16:19,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 22:16:19,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:19,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 22:16:19,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 22:16:19,482 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 22:16:19,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 22:16:19,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 22:16:19,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 22:16:19,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 22:16:19,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 22:16:19,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:19,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:19,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45763] to rsgroup master 2023-07-13 22:16:19,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 22:16:19,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50426 deadline: 1689287779491, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 2023-07-13 22:16:19,492 WARN [Listener at localhost/43483] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45763 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 22:16:19,494 INFO [Listener at localhost/43483] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 22:16:19,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 22:16:19,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 22:16:19,495 INFO [Listener at localhost/43483] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34769, jenkins-hbase4.apache.org:35629, jenkins-hbase4.apache.org:43005, jenkins-hbase4.apache.org:43583], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 22:16:19,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 22:16:19,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45763] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 22:16:19,513 INFO [Listener at localhost/43483] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=570 (was 570), OpenFileDescriptor=843 (was 843), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=384 (was 384), ProcessCount=172 (was 172), AvailableMemoryMB=3621 (was 3622) 2023-07-13 22:16:19,513 WARN [Listener at localhost/43483] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-13 22:16:19,513 INFO [Listener at localhost/43483] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-13 22:16:19,513 INFO [Listener at localhost/43483] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 22:16:19,513 DEBUG [Listener at localhost/43483] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x50f9ce59 to 127.0.0.1:63373 2023-07-13 22:16:19,513 DEBUG [Listener at localhost/43483] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:19,514 DEBUG [Listener at localhost/43483] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 
22:16:19,514 DEBUG [Listener at localhost/43483] util.JVMClusterUtil(257): Found active master hash=1284469120, stopped=false 2023-07-13 22:16:19,514 DEBUG [Listener at localhost/43483] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 22:16:19,514 DEBUG [Listener at localhost/43483] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 22:16:19,514 INFO [Listener at localhost/43483] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,45763,1689286575307 2023-07-13 22:16:19,515 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:19,515 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:19,515 INFO [Listener at localhost/43483] procedure2.ProcedureExecutor(629): Stopping 2023-07-13 22:16:19,515 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:19,515 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:19,515 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 22:16:19,516 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:19,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:19,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:19,516 DEBUG [Listener at localhost/43483] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x00f977bc to 127.0.0.1:63373 2023-07-13 22:16:19,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:19,516 DEBUG [Listener at localhost/43483] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:19,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:19,516 INFO [Listener at localhost/43483] regionserver.HRegionServer(2297): ***** STOPPING region 
server 'jenkins-hbase4.apache.org,34769,1689286575499' ***** 2023-07-13 22:16:19,516 INFO [Listener at localhost/43483] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 22:16:19,516 INFO [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:19,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 22:16:19,518 INFO [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1064): Closing user regions 2023-07-13 22:16:19,518 INFO [Listener at localhost/43483] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43005,1689286575655' ***** 2023-07-13 22:16:19,518 INFO [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(3305): Received CLOSE for e82ef2be6abd0b532d876efa9e2a9c31 2023-07-13 22:16:19,518 INFO [Listener at localhost/43483] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 22:16:19,520 INFO [RS:0;jenkins-hbase4:34769] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@138d4621{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:19,520 INFO [Listener at localhost/43483] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35629,1689286575808' ***** 2023-07-13 22:16:19,520 INFO [Listener at localhost/43483] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 22:16:19,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e82ef2be6abd0b532d876efa9e2a9c31, disabling compactions & flushes 2023-07-13 22:16:19,520 INFO [Listener at localhost/43483] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43583,1689286577318' ***** 2023-07-13 22:16:19,520 INFO [Listener at localhost/43483] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 22:16:19,520 INFO [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:19,520 INFO [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:19,520 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 2023-07-13 22:16:19,520 INFO [RS:0;jenkins-hbase4:34769] server.AbstractConnector(383): Stopped ServerConnector@63684f13{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:19,521 INFO [RS:0;jenkins-hbase4:34769] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:19,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 2023-07-13 22:16:19,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 
after waiting 0 ms 2023-07-13 22:16:19,523 INFO [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:19,523 INFO [RS:0;jenkins-hbase4:34769] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2e0aae4b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:19,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 2023-07-13 22:16:19,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e82ef2be6abd0b532d876efa9e2a9c31 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-13 22:16:19,524 INFO [RS:0;jenkins-hbase4:34769] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@77aa7250{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:19,526 INFO [RS:3;jenkins-hbase4:43583] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@783f1966{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:19,526 INFO [RS:1;jenkins-hbase4:43005] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@26105268{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:19,527 INFO [RS:1;jenkins-hbase4:43005] server.AbstractConnector(383): Stopped ServerConnector@30e74a47{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:19,527 INFO [RS:0;jenkins-hbase4:34769] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 22:16:19,527 INFO [RS:1;jenkins-hbase4:43005] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:19,527 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 22:16:19,527 INFO [RS:0;jenkins-hbase4:34769] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 22:16:19,527 INFO [RS:0;jenkins-hbase4:34769] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-13 22:16:19,528 INFO [RS:1;jenkins-hbase4:43005] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@60c861b8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:19,528 INFO [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:19,528 INFO [RS:3;jenkins-hbase4:43583] server.AbstractConnector(383): Stopped ServerConnector@8e66af4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:19,529 INFO [RS:1;jenkins-hbase4:43005] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@13211c10{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:19,529 DEBUG [RS:0;jenkins-hbase4:34769] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x22ca3124 to 127.0.0.1:63373 2023-07-13 22:16:19,529 INFO [RS:3;jenkins-hbase4:43583] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:19,529 INFO [RS:2;jenkins-hbase4:35629] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3dc1dbf8{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 22:16:19,529 DEBUG [RS:0;jenkins-hbase4:34769] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:19,529 INFO [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34769,1689286575499; all regions closed. 2023-07-13 22:16:19,530 INFO [RS:3;jenkins-hbase4:43583] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@336425c2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:19,531 INFO [RS:3;jenkins-hbase4:43583] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5526afb3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:19,531 INFO [RS:1;jenkins-hbase4:43005] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 22:16:19,531 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 22:16:19,531 INFO [RS:2;jenkins-hbase4:35629] server.AbstractConnector(383): Stopped ServerConnector@7df6aaec{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:19,531 INFO [RS:1;jenkins-hbase4:43005] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 22:16:19,531 INFO [RS:2;jenkins-hbase4:35629] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:19,531 INFO [RS:1;jenkins-hbase4:43005] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-13 22:16:19,532 INFO [RS:3;jenkins-hbase4:43583] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 22:16:19,532 INFO [RS:2;jenkins-hbase4:35629] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6bdfaecf{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:19,532 INFO [RS:3;jenkins-hbase4:43583] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 22:16:19,533 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 22:16:19,532 INFO [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(3305): Received CLOSE for 2fdbc50ddfe83bbee146394d8c1f3c34 2023-07-13 22:16:19,533 INFO [RS:2;jenkins-hbase4:35629] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@44c46cf5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:19,533 INFO [RS:3;jenkins-hbase4:43583] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 22:16:19,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2fdbc50ddfe83bbee146394d8c1f3c34, disabling compactions & flushes 2023-07-13 22:16:19,533 INFO [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:19,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:19,534 INFO [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:19,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:19,534 DEBUG [RS:1;jenkins-hbase4:43005] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5bbc9bb4 to 127.0.0.1:63373 2023-07-13 22:16:19,534 DEBUG [RS:1;jenkins-hbase4:43005] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:19,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. after waiting 0 ms 2023-07-13 22:16:19,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:19,534 DEBUG [RS:3;jenkins-hbase4:43583] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x18a9fda6 to 127.0.0.1:63373 2023-07-13 22:16:19,534 DEBUG [RS:3;jenkins-hbase4:43583] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:19,534 INFO [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43583,1689286577318; all regions closed. 
2023-07-13 22:16:19,534 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:19,535 INFO [RS:2;jenkins-hbase4:35629] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 22:16:19,535 INFO [RS:2;jenkins-hbase4:35629] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 22:16:19,535 INFO [RS:2;jenkins-hbase4:35629] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 22:16:19,535 INFO [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:19,535 DEBUG [RS:2;jenkins-hbase4:35629] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6e46f61a to 127.0.0.1:63373 2023-07-13 22:16:19,535 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 22:16:19,535 DEBUG [RS:2;jenkins-hbase4:35629] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:19,535 INFO [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 22:16:19,535 DEBUG [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1478): Online Regions={e82ef2be6abd0b532d876efa9e2a9c31=hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31.} 2023-07-13 22:16:19,535 DEBUG [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1504): Waiting on e82ef2be6abd0b532d876efa9e2a9c31 2023-07-13 22:16:19,534 INFO [RS:1;jenkins-hbase4:43005] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 22:16:19,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 2fdbc50ddfe83bbee146394d8c1f3c34 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-13 22:16:19,536 INFO [RS:1;jenkins-hbase4:43005] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 22:16:19,536 INFO [RS:1;jenkins-hbase4:43005] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 22:16:19,536 INFO [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 22:16:19,543 INFO [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-13 22:16:19,545 DEBUG [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 2fdbc50ddfe83bbee146394d8c1f3c34=hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34.} 2023-07-13 22:16:19,544 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 22:16:19,545 DEBUG [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1504): Waiting on 1588230740, 2fdbc50ddfe83bbee146394d8c1f3c34 2023-07-13 22:16:19,545 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 22:16:19,545 DEBUG [RS:0;jenkins-hbase4:34769] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/oldWALs 2023-07-13 22:16:19,545 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 22:16:19,545 INFO [RS:0;jenkins-hbase4:34769] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34769%2C1689286575499:(num 1689286576464) 2023-07-13 22:16:19,545 DEBUG [RS:0;jenkins-hbase4:34769] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:19,546 INFO [RS:0;jenkins-hbase4:34769] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:19,545 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 22:16:19,546 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 22:16:19,546 INFO [RS:0;jenkins-hbase4:34769] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:19,546 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-13 22:16:19,546 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:19,546 INFO [RS:0;jenkins-hbase4:34769] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 22:16:19,546 INFO [RS:0;jenkins-hbase4:34769] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 22:16:19,546 INFO [RS:0;jenkins-hbase4:34769] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 22:16:19,547 INFO [RS:0;jenkins-hbase4:34769] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34769 2023-07-13 22:16:19,552 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:19,552 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:19,552 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:19,552 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:19,552 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:19,552 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:19,552 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34769,1689286575499 2023-07-13 22:16:19,552 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:19,552 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:19,552 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34769,1689286575499] 2023-07-13 22:16:19,552 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34769,1689286575499; numProcessing=1 2023-07-13 22:16:19,553 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34769,1689286575499 already deleted, retry=false 2023-07-13 22:16:19,553 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34769,1689286575499 expired; onlineServers=3 2023-07-13 22:16:19,563 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-13 22:16:19,563 INFO 
[regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-13 22:16:19,566 DEBUG [RS:3;jenkins-hbase4:43583] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/oldWALs 2023-07-13 22:16:19,566 INFO [RS:3;jenkins-hbase4:43583] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43583%2C1689286577318:(num 1689286577632) 2023-07-13 22:16:19,566 DEBUG [RS:3;jenkins-hbase4:43583] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:19,567 INFO [RS:3;jenkins-hbase4:43583] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:19,594 INFO [RS:3;jenkins-hbase4:43583] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:19,595 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31/.tmp/info/66acfb96a43145e399be9cb711f94e4a 2023-07-13 22:16:19,595 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:19,595 INFO [RS:3;jenkins-hbase4:43583] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 22:16:19,595 INFO [RS:3;jenkins-hbase4:43583] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 22:16:19,595 INFO [RS:3;jenkins-hbase4:43583] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 22:16:19,596 INFO [RS:3;jenkins-hbase4:43583] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43583 2023-07-13 22:16:19,598 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:19,598 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:19,598 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43583,1689286577318 2023-07-13 22:16:19,598 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:19,600 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43583,1689286577318] 2023-07-13 22:16:19,600 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43583,1689286577318; numProcessing=2 2023-07-13 22:16:19,602 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43583,1689286577318 already deleted, retry=false 2023-07-13 22:16:19,602 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43583,1689286577318 expired; onlineServers=2 2023-07-13 22:16:19,605 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34/.tmp/m/be6ea2d6d63e46f88186b12852cb0fe8 2023-07-13 22:16:19,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 66acfb96a43145e399be9cb711f94e4a 2023-07-13 22:16:19,607 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/.tmp/info/fda53fc54d84470cb93209aed5ff4610 2023-07-13 22:16:19,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31/.tmp/info/66acfb96a43145e399be9cb711f94e4a as hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31/info/66acfb96a43145e399be9cb711f94e4a 2023-07-13 22:16:19,610 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:19,614 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for be6ea2d6d63e46f88186b12852cb0fe8 2023-07-13 22:16:19,615 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:19,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34/.tmp/m/be6ea2d6d63e46f88186b12852cb0fe8 as hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34/m/be6ea2d6d63e46f88186b12852cb0fe8 2023-07-13 22:16:19,616 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fda53fc54d84470cb93209aed5ff4610 2023-07-13 22:16:19,618 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:19,619 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 66acfb96a43145e399be9cb711f94e4a 2023-07-13 22:16:19,619 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31/info/66acfb96a43145e399be9cb711f94e4a, entries=3, sequenceid=9, filesize=5.0 K 2023-07-13 22:16:19,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for e82ef2be6abd0b532d876efa9e2a9c31 in 96ms, sequenceid=9, compaction requested=false 2023-07-13 22:16:19,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for be6ea2d6d63e46f88186b12852cb0fe8 2023-07-13 22:16:19,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34/m/be6ea2d6d63e46f88186b12852cb0fe8, entries=12, sequenceid=29, filesize=5.4 K 2023-07-13 22:16:19,630 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 2fdbc50ddfe83bbee146394d8c1f3c34 in 96ms, sequenceid=29, compaction requested=false 2023-07-13 22:16:19,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/namespace/e82ef2be6abd0b532d876efa9e2a9c31/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-13 22:16:19,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 
2023-07-13 22:16:19,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e82ef2be6abd0b532d876efa9e2a9c31: 2023-07-13 22:16:19,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689286576557.e82ef2be6abd0b532d876efa9e2a9c31. 2023-07-13 22:16:19,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/rsgroup/2fdbc50ddfe83bbee146394d8c1f3c34/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-13 22:16:19,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 22:16:19,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:19,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2fdbc50ddfe83bbee146394d8c1f3c34: 2023-07-13 22:16:19,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689286576690.2fdbc50ddfe83bbee146394d8c1f3c34. 2023-07-13 22:16:19,636 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/.tmp/rep_barrier/6eb975d55bfa4fb9af0a3a473e7aa94b 2023-07-13 22:16:19,641 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6eb975d55bfa4fb9af0a3a473e7aa94b 2023-07-13 22:16:19,653 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/.tmp/table/108aabf827404277b24b823846b51507 2023-07-13 22:16:19,658 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 108aabf827404277b24b823846b51507 2023-07-13 22:16:19,659 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/.tmp/info/fda53fc54d84470cb93209aed5ff4610 as hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/info/fda53fc54d84470cb93209aed5ff4610 2023-07-13 22:16:19,665 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fda53fc54d84470cb93209aed5ff4610 2023-07-13 22:16:19,665 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/info/fda53fc54d84470cb93209aed5ff4610, entries=22, sequenceid=26, filesize=7.3 K 2023-07-13 22:16:19,665 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/.tmp/rep_barrier/6eb975d55bfa4fb9af0a3a473e7aa94b as hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/rep_barrier/6eb975d55bfa4fb9af0a3a473e7aa94b 2023-07-13 22:16:19,671 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6eb975d55bfa4fb9af0a3a473e7aa94b 2023-07-13 22:16:19,671 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/rep_barrier/6eb975d55bfa4fb9af0a3a473e7aa94b, entries=1, sequenceid=26, filesize=4.9 K 2023-07-13 22:16:19,672 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/.tmp/table/108aabf827404277b24b823846b51507 as hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/table/108aabf827404277b24b823846b51507 2023-07-13 22:16:19,678 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 108aabf827404277b24b823846b51507 2023-07-13 22:16:19,678 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/table/108aabf827404277b24b823846b51507, entries=6, sequenceid=26, filesize=5.1 K 2023-07-13 22:16:19,679 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 133ms, sequenceid=26, compaction requested=false 2023-07-13 22:16:19,690 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-13 22:16:19,691 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 22:16:19,692 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 22:16:19,692 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 22:16:19,692 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 22:16:19,716 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:19,716 INFO [RS:3;jenkins-hbase4:43583] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43583,1689286577318; zookeeper connection closed. 
2023-07-13 22:16:19,716 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43583-0x10160c207bc000b, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:19,717 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@947d06] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@947d06 2023-07-13 22:16:19,736 INFO [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35629,1689286575808; all regions closed. 2023-07-13 22:16:19,740 DEBUG [RS:2;jenkins-hbase4:35629] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/oldWALs 2023-07-13 22:16:19,740 INFO [RS:2;jenkins-hbase4:35629] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35629%2C1689286575808:(num 1689286576463) 2023-07-13 22:16:19,740 DEBUG [RS:2;jenkins-hbase4:35629] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:19,740 INFO [RS:2;jenkins-hbase4:35629] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:19,741 INFO [RS:2;jenkins-hbase4:35629] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:19,741 INFO [RS:2;jenkins-hbase4:35629] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 22:16:19,741 INFO [RS:2;jenkins-hbase4:35629] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 22:16:19,741 INFO [RS:2;jenkins-hbase4:35629] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 22:16:19,741 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:19,742 INFO [RS:2;jenkins-hbase4:35629] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35629 2023-07-13 22:16:19,743 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:19,743 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35629,1689286575808 2023-07-13 22:16:19,743 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:19,745 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35629,1689286575808] 2023-07-13 22:16:19,745 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35629,1689286575808; numProcessing=3 2023-07-13 22:16:19,745 INFO [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43005,1689286575655; all regions closed. 
2023-07-13 22:16:19,746 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35629,1689286575808 already deleted, retry=false 2023-07-13 22:16:19,746 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35629,1689286575808 expired; onlineServers=1 2023-07-13 22:16:19,750 DEBUG [RS:1;jenkins-hbase4:43005] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/oldWALs 2023-07-13 22:16:19,750 INFO [RS:1;jenkins-hbase4:43005] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43005%2C1689286575655.meta:.meta(num 1689286576500) 2023-07-13 22:16:19,754 DEBUG [RS:1;jenkins-hbase4:43005] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/oldWALs 2023-07-13 22:16:19,754 INFO [RS:1;jenkins-hbase4:43005] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43005%2C1689286575655:(num 1689286576463) 2023-07-13 22:16:19,754 DEBUG [RS:1;jenkins-hbase4:43005] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:19,754 INFO [RS:1;jenkins-hbase4:43005] regionserver.LeaseManager(133): Closed leases 2023-07-13 22:16:19,755 INFO [RS:1;jenkins-hbase4:43005] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 22:16:19,755 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:19,756 INFO [RS:1;jenkins-hbase4:43005] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43005 2023-07-13 22:16:19,757 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43005,1689286575655 2023-07-13 22:16:19,757 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 22:16:19,758 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43005,1689286575655] 2023-07-13 22:16:19,758 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43005,1689286575655; numProcessing=4 2023-07-13 22:16:19,759 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43005,1689286575655 already deleted, retry=false 2023-07-13 22:16:19,760 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43005,1689286575655 expired; onlineServers=0 2023-07-13 22:16:19,760 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45763,1689286575307' ***** 2023-07-13 22:16:19,760 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 22:16:19,760 DEBUG [M:0;jenkins-hbase4:45763] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48b0fcc9, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 22:16:19,760 INFO [M:0;jenkins-hbase4:45763] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 22:16:19,763 INFO [M:0;jenkins-hbase4:45763] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@34d64535{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 22:16:19,763 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 22:16:19,763 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 22:16:19,763 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 22:16:19,763 INFO [M:0;jenkins-hbase4:45763] server.AbstractConnector(383): Stopped ServerConnector@74189b3b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:19,763 INFO [M:0;jenkins-hbase4:45763] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 22:16:19,764 INFO [M:0;jenkins-hbase4:45763] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@69e30e67{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 22:16:19,765 INFO [M:0;jenkins-hbase4:45763] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7b59d690{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/hadoop.log.dir/,STOPPED} 2023-07-13 22:16:19,765 INFO [M:0;jenkins-hbase4:45763] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45763,1689286575307 2023-07-13 22:16:19,765 INFO [M:0;jenkins-hbase4:45763] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45763,1689286575307; all regions closed. 2023-07-13 22:16:19,765 DEBUG [M:0;jenkins-hbase4:45763] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 22:16:19,765 INFO [M:0;jenkins-hbase4:45763] master.HMaster(1491): Stopping master jetty server 2023-07-13 22:16:19,766 INFO [M:0;jenkins-hbase4:45763] server.AbstractConnector(383): Stopped ServerConnector@149a369b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 22:16:19,766 DEBUG [M:0;jenkins-hbase4:45763] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 22:16:19,766 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-13 22:16:19,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689286576093] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689286576093,5,FailOnTimeoutGroup] 2023-07-13 22:16:19,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689286576093] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689286576093,5,FailOnTimeoutGroup] 2023-07-13 22:16:19,766 DEBUG [M:0;jenkins-hbase4:45763] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 22:16:19,766 INFO [M:0;jenkins-hbase4:45763] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 22:16:19,766 INFO [M:0;jenkins-hbase4:45763] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-13 22:16:19,766 INFO [M:0;jenkins-hbase4:45763] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-13 22:16:19,766 DEBUG [M:0;jenkins-hbase4:45763] master.HMaster(1512): Stopping service threads 2023-07-13 22:16:19,766 INFO [M:0;jenkins-hbase4:45763] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 22:16:19,767 ERROR [M:0;jenkins-hbase4:45763] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-13 22:16:19,767 INFO [M:0;jenkins-hbase4:45763] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 22:16:19,767 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-13 22:16:19,767 DEBUG [M:0;jenkins-hbase4:45763] zookeeper.ZKUtil(398): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-13 22:16:19,767 WARN [M:0;jenkins-hbase4:45763] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-13 22:16:19,767 INFO [M:0;jenkins-hbase4:45763] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 22:16:19,767 INFO [M:0;jenkins-hbase4:45763] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 22:16:19,767 DEBUG [M:0;jenkins-hbase4:45763] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 22:16:19,767 INFO [M:0;jenkins-hbase4:45763] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:19,767 DEBUG [M:0;jenkins-hbase4:45763] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:19,767 DEBUG [M:0;jenkins-hbase4:45763] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 22:16:19,767 DEBUG [M:0;jenkins-hbase4:45763] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-13 22:16:19,768 INFO [M:0;jenkins-hbase4:45763] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.23 KB heapSize=90.66 KB 2023-07-13 22:16:19,779 INFO [M:0;jenkins-hbase4:45763] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.23 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d69b9110934946b6b51a4d6e0b2f4524 2023-07-13 22:16:19,785 DEBUG [M:0;jenkins-hbase4:45763] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d69b9110934946b6b51a4d6e0b2f4524 as hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d69b9110934946b6b51a4d6e0b2f4524 2023-07-13 22:16:19,789 INFO [M:0;jenkins-hbase4:45763] regionserver.HStore(1080): Added hdfs://localhost:40407/user/jenkins/test-data/4c9c9422-37f4-1d9a-ccdf-5f8691395a18/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d69b9110934946b6b51a4d6e0b2f4524, entries=22, sequenceid=175, filesize=11.1 K 2023-07-13 22:16:19,790 INFO [M:0;jenkins-hbase4:45763] regionserver.HRegion(2948): Finished flush of dataSize ~76.23 KB/78055, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=175, compaction requested=false 2023-07-13 22:16:19,791 INFO [M:0;jenkins-hbase4:45763] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 22:16:19,791 DEBUG [M:0;jenkins-hbase4:45763] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 22:16:19,796 INFO [M:0;jenkins-hbase4:45763] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-13 22:16:19,797 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 22:16:19,797 INFO [M:0;jenkins-hbase4:45763] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45763 2023-07-13 22:16:19,798 DEBUG [M:0;jenkins-hbase4:45763] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,45763,1689286575307 already deleted, retry=false 2023-07-13 22:16:19,816 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:19,816 INFO [RS:0;jenkins-hbase4:34769] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34769,1689286575499; zookeeper connection closed. 
2023-07-13 22:16:19,817 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:34769-0x10160c207bc0001, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:19,817 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@56bd2746] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@56bd2746 2023-07-13 22:16:20,418 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:20,418 INFO [M:0;jenkins-hbase4:45763] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45763,1689286575307; zookeeper connection closed. 2023-07-13 22:16:20,418 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): master:45763-0x10160c207bc0000, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:20,518 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:20,518 INFO [RS:1;jenkins-hbase4:43005] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43005,1689286575655; zookeeper connection closed. 2023-07-13 22:16:20,518 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x10160c207bc0002, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:20,518 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3e18a08c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3e18a08c 2023-07-13 22:16:20,618 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 22:16:20,618 INFO [RS:2;jenkins-hbase4:35629] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35629,1689286575808; zookeeper connection closed. 
2023-07-13 22:16:20,618 DEBUG [Listener at localhost/43483-EventThread] zookeeper.ZKWatcher(600): regionserver:35629-0x10160c207bc0003, quorum=127.0.0.1:63373, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-13 22:16:20,619 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@33fb28cf] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@33fb28cf
2023-07-13 22:16:20,619 INFO [Listener at localhost/43483] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-13 22:16:20,619 WARN [Listener at localhost/43483] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-13 22:16:20,623 INFO [Listener at localhost/43483] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-13 22:16:20,726 WARN [BP-1973182020-172.31.14.131-1689286574453 heartbeating to localhost/127.0.0.1:40407] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-13 22:16:20,726 WARN [BP-1973182020-172.31.14.131-1689286574453 heartbeating to localhost/127.0.0.1:40407] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1973182020-172.31.14.131-1689286574453 (Datanode Uuid 2ab024b8-beb0-483e-8f8e-5f86e08a9f46) service to localhost/127.0.0.1:40407
2023-07-13 22:16:20,727 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data5/current/BP-1973182020-172.31.14.131-1689286574453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-13 22:16:20,727 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data6/current/BP-1973182020-172.31.14.131-1689286574453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-13 22:16:20,728 WARN [Listener at localhost/43483] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-13 22:16:20,734 INFO [Listener at localhost/43483] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-13 22:16:20,837 WARN [BP-1973182020-172.31.14.131-1689286574453 heartbeating to localhost/127.0.0.1:40407] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-13 22:16:20,837 WARN [BP-1973182020-172.31.14.131-1689286574453 heartbeating to localhost/127.0.0.1:40407] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1973182020-172.31.14.131-1689286574453 (Datanode Uuid c96fce37-487f-4275-8b94-56fd4fd426bc) service to localhost/127.0.0.1:40407
2023-07-13 22:16:20,838 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data3/current/BP-1973182020-172.31.14.131-1689286574453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-13 22:16:20,838 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data4/current/BP-1973182020-172.31.14.131-1689286574453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-13 22:16:20,839 WARN [Listener at localhost/43483] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-13 22:16:20,852 INFO [Listener at localhost/43483] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-13 22:16:20,868 WARN [BP-1973182020-172.31.14.131-1689286574453 heartbeating to localhost/127.0.0.1:40407] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1973182020-172.31.14.131-1689286574453 (Datanode Uuid 323e90a0-9f34-4753-97a7-5085f0ae2659) service to localhost/127.0.0.1:40407
2023-07-13 22:16:20,869 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data1/current/BP-1973182020-172.31.14.131-1689286574453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-13 22:16:20,869 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d066d1e5-6766-4922-f0bb-a0ace337b5f0/cluster_b4ac565b-786f-5ed8-c4e0-06c74c91e990/dfs/data/data2/current/BP-1973182020-172.31.14.131-1689286574453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-13 22:16:20,966 INFO [Listener at localhost/43483] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-13 22:16:21,084 INFO [Listener at localhost/43483] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-13 22:16:21,110 INFO [Listener at localhost/43483] hbase.HBaseTestingUtility(1293): Minicluster is down
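Editor's note: the teardown sequence above (master store flush, regionserver exits, DataNode/MiniZK shutdown, ending with "Minicluster is down") corresponds to HBaseTestingUtility#shutdownMiniCluster() completing. The following is a minimal, illustrative JUnit 4 sketch of the harness pattern that produces such a startup/shutdown cycle, assuming the HBase 2.4 test classes named in this log (HBaseTestingUtility, StartMiniClusterOption, HBaseClassTestRule); the class name and option values are hypothetical, not the actual TestRSGroupsAdmin1 source.

    // Illustrative sketch only; not the author's test code.
    import org.apache.hadoop.hbase.HBaseClassTestRule;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.ClassRule;

    public class MiniClusterLifecycleExample {

      // Enforces the per-class test timeout seen near the top of this log.
      @ClassRule
      public static final HBaseClassTestRule CLASS_RULE =
          HBaseClassTestRule.forClass(MiniClusterLifecycleExample.class);

      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUp() throws Exception {
        // Brings up MiniZK, MiniDFS and an HBase master plus regionservers
        // (counts here are illustrative).
        TEST_UTIL.startMiniCluster(StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .build());
      }

      @AfterClass
      public static void tearDown() throws Exception {
        // Stops HBase, the DataNodes and the mini ZK quorum; the final
        // "Minicluster is down" line above is logged by this call.
        TEST_UTIL.shutdownMiniCluster();
      }
    }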